

HashiCorp User Hub Thailand #2 - Simplify Proxmox VM Management with Terraform

Karn Wong

August 20, 2024


Transcript

  1. About me
     - kahnwong / Karnsiree Wong / karnwong.me
     - Platform Engineer @ Data Cafe Company Limited
     - Firmly believes that all configurations should be documented as code
     - Loves automation
  2. Table of contents
     1. What is Proxmox
     2. Creating a VM
        1. The better way (cloud-init)
        2. What if I told you it can be reduced to this
     3. Enter Terraform
        1. Initialization
        2. Upload base image
        3. Create cloud-init config
           1. Cloud-init dependencies
        4. Creating a VM with Terraform
        5. VM Configurations
  3. What is Proxmox
     - A virtualization platform
     - Can create Virtual Machines and Containers (LXC)
     - Includes a management UI
     - Provides monitoring
     - Utilizes noVNC to provide remote access
  4. Creating a VM: the normal way
     1. Upload an ISO to Proxmox VE
     2. In the UI, create a VM and specify its configuration:
        - ISO image
        - CPU
        - Memory
        - Disk size
     3. Boot the VM and set:
        - Username
        - Password
        - Public key
        - Packages
        - Workloads

     Problems with this setup:
     - You have to go through the installation process every time, unless you create a template off an existing VM (which is a system image)
     - This means that if you have a lot of templates, they require more disk space
  5. The better way (cloud-init)

     With cloud-init, you can have configuration templates, with no system images required. However, using cloud-init manually means a lot of configuration. Notice that you have to provision your own public key; imagine you have 100 VMs to manage and you don't want to reuse the keys…

```shell
apt-get install cloud-init
wget https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img
qm create 9000 --memory 2048 --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci
qm set 9000 --scsi0 local-lvm:0,import-from=/path/to/bionic-server-cloudimg-amd64.img
qm set 9000 --ide2 local-lvm:cloudinit
qm set 9000 --boot order=scsi0
qm set 9000 --serial0 socket --vga serial0
qm template 9000
qm clone 9000 123 --name ubuntu2
qm set 123 --sshkey ~/.ssh/id_rsa.pub
qm set 123 --ipconfig0 ip=10.0.10.123/24,gw=10.0.10.1
```
  6. What if I told you it can be reduced to this

     locals.tf, then `terraform apply`. Obviously you need more Terraform resource blocks, but when you have to spin up a VM, this is the only thing you have to touch:

```terraform
locals {
  vms = {
    foo = {
      id       = 100    # starts at 100
      on_boot  = true
      template = "base" # "docker", "kubernetes" -- cloud-init templates
      cpu      = 1
      memory   = 512
      disk     = 8
    }
  }
}
```
  7. Enter Terraform

     Sadly there is no official Proxmox provider, but these two look promising: Telmate and bpg. Telmate has more stars, but it is very buggy and slow; I encountered a lot of issues with it. So far I've been using bpg for over a year without issues.
  8. Initialization

```terraform
terraform {
  required_version = ">= 1.3.6"

  required_providers {
    proxmox = {
      source  = "bpg/proxmox"
      version = "0.62.0"
    }
  }
}

provider "proxmox" {
  endpoint = var.proxmox_endpoint
  username = var.proxmox_username
  password = var.proxmox_password
}
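The provider block references three input variables that the deck does not define. A minimal variables.tf sketch (the variable names come from the slide; the types, descriptions, and sensitivity flag are my assumptions):

```terraform
variable "proxmox_endpoint" {
  type        = string
  description = "Proxmox VE API endpoint, e.g. https://pve.example.com:8006/"
}

variable "proxmox_username" {
  type        = string
  description = "Proxmox user, e.g. root@pam"
}

variable "proxmox_password" {
  type      = string
  sensitive = true # keep the password out of plan output
}
```

Values would typically come from a `terraform.tfvars` file or `TF_VAR_*` environment variables.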
  9. Upload base image

     You can do this with Terraform, but I find it very flaky, so it's safer to upload the image manually through the web console:

```terraform
resource "proxmox_virtual_environment_file" "iso" {
  content_type = "iso"
  datastore_id = "local"
  node_name    = var.node_name

  source_file {
    path = "https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img"
  }
}
```
  10. Create cloud-init config

      This is done per VM (because you need to bake in the public key). You also need to enable snippets: under Datacenter > Storage > local, add "snippets" to the storage content types.

```terraform
resource "proxmox_virtual_environment_file" "this" {
  for_each = toset(keys(local.vms))

  content_type = "snippets"
  datastore_id = "local"
  node_name    = var.node_name

  source_raw {
    data = templatefile("${path.module}/templates/${local.vms[each.key].template}.cloud-config.yaml.tftpl", {
      vm_name    = each.key
      public_key = trimspace(tls_private_key.ecdsa[each.key].public_key_openssh)
    })
    file_name = "${each.key}.cloud-config.yaml"
  }
}
```
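The config above references `tls_private_key.ecdsa`, which the deck never shows. A plausible sketch using the hashicorp/tls provider (the curve choice is an assumption; any curve the provider supports works):

```terraform
# One ECDSA keypair per VM, so keys are never shared between machines
resource "tls_private_key" "ecdsa" {
  for_each = toset(keys(local.vms))

  algorithm   = "ECDSA"
  ecdsa_curve = "P256" # assumed; P384 or P521 would also work
}
```

With `for_each` keyed on the same map, each VM's key is addressable as `tls_private_key.ecdsa[each.key]`, which is exactly how the snippets and `local_file` resources consume it.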
  11. Cloud-init dependencies

      The point here is to have multiple pre-defined cloud-init templates for different use cases, and to avoid reusing public keys. Notice the cloud-init template naming ($x.cloud-config.yaml.tftpl) and the templated public key. Sample cloud-init template for Kubernetes:

```yaml
hostname: ${vm_name}
package_update: true
package_upgrade: true
packages:
  - curl
  - htop
  - net-tools
  - qemu-guest-agent
  - software-properties-common
runcmd:
  - systemctl start qemu-guest-agent.service
  - apt clean
  - apt autoremove
  - curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server
growpart:
  mode: auto
  devices: ["/"]
users:
  - default
```
  12. Creating a VM with Terraform

```terraform
resource "proxmox_virtual_environment_vm" "this" {
  for_each = toset(keys(local.vms))

  name        = each.key
  description = "Managed by Terraform"
  tags        = ["terraform", "ubuntu", local.vms[each.key].template]

  # ... redacted due to space

  cpu {
  }

  memory {
  }

  disk {
    datastore_id = "local-lvm"
    file_id      = "local:iso/jammy-server-cloudimg-amd64.img"
    interface    = "scsi0"
    size         = local.vms[each.key].disk
  }

  user_data_file_id = proxmox_virtual_environment_file.this[each.key].id
}
```
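The cpu and memory settings are redacted on the slide. With the bpg provider, they would plausibly wire up to the locals map like this (the `cores` and `dedicated` field names are from my reading of the provider schema; treat them as assumptions):

```terraform
cpu {
  cores = local.vms[each.key].cpu
}

memory {
  dedicated = local.vms[each.key].memory # MiB
}
```

This keeps every tunable in the single `local.vms` map, which is the whole point of the setup.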
  13. VM Configurations

      This block writes each VM's private key to disk so you can use it to SSH into the VM.

```terraform
locals {
  vms = {
    foo = {
      id       = 100    # starts at 100
      on_boot  = true
      template = "base" # "docker", "kubernetes"
      cpu      = 1
      memory   = 512
      disk     = 8
    }
  }
}

resource "local_file" "vm_key" {
  for_each = toset(keys(local.vms))

  content  = trimspace(tls_private_key.ecdsa[each.key].private_key_pem)
  filename = "generated/keys/${each.key}.pem"

  depends_on = [tls_private_key.ecdsa]
}
```
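As a usage sketch: the key path comes from the slide, while the default "ubuntu" user and the IP address are assumptions. SSH refuses private keys with loose permissions, so the file mode matters:

```shell
# In real usage `terraform apply` writes generated/keys/foo.pem; it is
# created empty here only so the permission step can be demonstrated:
mkdir -p generated/keys
touch generated/keys/foo.pem
chmod 600 generated/keys/foo.pem # ssh rejects keys readable by others

# Then, assuming the cloud image's default "ubuntu" user and a hypothetical IP:
# ssh -i generated/keys/foo.pem ubuntu@10.0.10.100
```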
  14. Conclusion
      - Creating a Proxmox VM is simple, but it requires you to configure everything every time, unless you create a system image off it
      - There's cloud-init, but you need to apply more configuration
      - Terraform can help you reduce the steps to spin up a VM, plus you can track VM configurations as code
      - In the end, you only have to edit a single map block to specify a VM's configuration