
Infrastructure As Code in Multicloud: How To Do?


Building infrastructure on cloud vendors involves, depending on your needs, a series of serially executed steps that can make the process complex both to create and to maintain.

With the DevOps movement, tools have emerged that let you define infrastructure as code to simplify this process. But where to start?

And, above all, how do you provision redundant infrastructure not just across different datacenters, but across different cloud providers?

We will go step by step from a basic virtual private network configuration with good practices to provisioning across different clouds, using Terraform as the tool.

Marcelo Pinheiro

May 05, 2017

Transcript

  1. $ whoami
     • Fireman / Problem Solver / Programmer since 2000
     • Ruby, Python, Golang, Java, C#, Classic ASP, PHP, Node.js, Erlang and others
     • Fought, made coffee, negotiated deadlines
     • DevOps Engineer at Work & Co
  2. Cloud Providers: AWS
     • Cloud Virtual Networking: VPC
     • Virtual Servers: EC2
     • Containers: EC2 Container Service
       • AWS-based solution with custom Docker
     • Object Store: S3
     • DNS: Route53
     • Databases: RDS
       • Amazon Aurora, MySQL, PostgreSQL, Oracle, Microsoft SQL Server
     • Cache: ElastiCache
       • Redis, Memcached
     • Computing: Lambda
       • C#, Javascript, Python, Java
     • Machine Learning: AWS Machine Learning
  3. Cloud Providers: AWS
     • Topology: Region / Availability Zone (AZ) based
     • Each region has at least two AZs
     • Some services run at Region scope (VPC, S3)
     • You must choose which AZ will run your instances
     • Subnets are per-AZ
  4. Cloud Providers: Azure
     • Cloud Virtual Networking: Virtual Networks
     • Virtual Servers: Virtual Machines
     • Containers: Container Services
       • DC/OS, Docker Swarm, Kubernetes
     • Object Store: Storage Accounts (Files)
     • DNS: Azure DNS Zones
     • Databases:
       • SQL Databases (SQL Server), MySQL Databases / Cluster, NoSQL Databases (DocumentDB, MongoDB)
     • Cache: Redis Caches
     • Computing: Function Apps
       • C#, F#, Python, Javascript, PowerShell
     • Machine Learning: Azure Machine Learning
  5. Cloud Providers: Azure
     • Topology: Region based
     • No Availability Zones
     • Services run at region scope
     • Subnets are per-region
  6. Cloud Providers: Google
     • Cloud Virtual Networking: GCP Networks
     • Virtual Servers: GCE VM Instances
     • Containers: GCE Container Clusters
       • Kubernetes
     • Object Store: Cloud Storage
     • DNS: Cloud DNS
     • Databases: Cloud SQL
       • MySQL, PostgreSQL (beta)
     • Cache: not available
     • Computing: Cloud Functions (beta)
       • Javascript
     • Machine Learning: ML Engine
  7. Cloud Providers: Google
     • Topology: Region / Zone based
     • Each region has at least two Zones
     • You must choose which Zone will run your instances
     • Subnets are per-Zone
  8. Virtual Networks: that “default”
     • AWS and GCE offer a “default” VN, ready to go
     • Good for tests, not recommended for production
     • Firewall / security groups too open to the world
     • Subnets with public IPs by default (AWS)
  9. Virtual Networks: that “default”
     • What it implies:
       • Vulnerable to attacks
       • Hard to scale up
       • Difficult to maintain
       • A mess for /(dev)?ops/ teams
       • Without an Infrastructure as Code tool, a nightmare
  10. Custom Virtual Networks: so what?
      • Virtual Networks have their best practices too, *especially* when you use a multitier architecture. Getting them right provides:
        • Isolation
        • Organization
        • Maintainability
        • Scalability
  11. Custom Virtual Networks: Best Practices
      • The bastion is the only server accessible via SSH. Period.
      • All others must be accessed ONLY through the bastion
      • Create a main security group / firewall to be used by the bastion (see the sketch after this slide)
        • Useful when you have VPNs and other network restrictions
      • Create Security Group / Firewall rules for EACH load balancer exposed to the interwebs, and another for their instances
        • For each application stack, of course!
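      A minimal sketch of this pattern in Terraform, assuming a hypothetical
      aws_vpc.main resource and a hypothetical var.vpn_cidr variable (neither
      appears in the deck's recipes): the bastion group accepts SSH only from
      the VPN range, and the application group accepts SSH only from the
      bastion group.

      # Sketch only: aws_vpc.main and var.vpn_cidr are hypothetical names.
      resource "aws_security_group" "bastion" {
        name   = "bastion"
        vpc_id = "${aws_vpc.main.id}"

        # SSH reachable only from the VPN range, never from 0.0.0.0/0
        ingress {
          from_port   = 22
          to_port     = 22
          protocol    = "tcp"
          cidr_blocks = [ "${var.vpn_cidr}" ]
        }
      }

      resource "aws_security_group" "app" {
        name   = "app"
        vpc_id = "${aws_vpc.main.id}"

        # SSH accepted only from the bastion security group
        ingress {
          from_port       = 22
          to_port         = 22
          protocol        = "tcp"
          security_groups = [ "${aws_security_group.bastion.id}" ]
        }
      }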
  12. Custom Virtual Networks: Best Practices
      • Public subnets for:
        • Load Balancers
        • Application servers
        • Databases? It depends
      • Private subnets for:
        • Load Balancers
        • Application Servers
        • Databases
      (a subnet sketch follows this slide)
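      One way to express that split, sketched with hypothetical names
      (aws_vpc.main, var.pub_cidrs, var.priv_cidrs, var.azs); the real CIDR
      ranges and AZ list would come from your own variables.

      # Sketch only: the VPC and the CIDR / AZ list variables are hypothetical.
      resource "aws_subnet" "pub" {
        count                   = 2
        vpc_id                  = "${aws_vpc.main.id}"
        cidr_block              = "${element(var.pub_cidrs, count.index)}"
        availability_zone       = "${element(var.azs, count.index)}"
        map_public_ip_on_launch = true   # public: load balancers live here
      }

      resource "aws_subnet" "priv" {
        count             = 2
        vpc_id            = "${aws_vpc.main.id}"
        cidr_block        = "${element(var.priv_cidrs, count.index)}"
        availability_zone = "${element(var.azs, count.index)}"
        # private: application servers and databases, no public IPs
      }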
  13. Custom Virtual Networks: Best Practices
      • High Availability in mind
        • At least 2 servers running in different subnets AND availability zones* (sketched after this slide)
        • Databases in master / slave if you have $$$
      • For PCI-compliant networks, you can use Network ACLs to improve inbound / outbound traffic security (AWS only)
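      A sketch of that spread using count, reusing the hypothetical private
      subnets and app security group from the sketches above plus the deck's
      var.aws_instance map.

      # Sketch only: aws_subnet.priv and aws_security_group.app come from the
      # hypothetical sketches above.
      resource "aws_instance" "app" {
        count                  = 2
        ami                    = "${var.aws_instance["ami"]}"
        instance_type          = "${var.aws_instance["instance_type"]}"
        subnet_id              = "${element(aws_subnet.priv.*.id, count.index)}"
        vpc_security_group_ids = [ "${aws_security_group.app.id}" ]

        tags {
          Name = "app-${count.index}"
        }
      }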
  14. Custom Virtual Networks: How to manage?
      • You have some options:
        • Common CLI provided by your cloud provider
        • Configuration Management tools (Ansible, Chef, Puppet, SaltStack etc.)
        • Cloud provider panel
  15. Custom Virtual Networks: How to manage?
      • But…
        • The cloud provider CLI is not idempotent
        • Not all configuration management tools are idempotent
      • Let’s try to create a Virtual Network by hand
  16. Terraform: What
      • A tool for building, changing and versioning infrastructure
        • Servers
        • Object store
        • Virtual Networks
        • DNS zones
        • etc.
      • Based on human-readable recipes
      • Idempotent
  17. Terraform: What
      • You can combine provisioning with your configuration management of choice
      • Dev, QA, UAT and Production environments can be managed with reusable code
      • Multicloud support
  18. Terraform: What
      • It is awesome, but it has problems, like any other tool
      • Be careful with environments (destroy command)
      • Some small but annoying bugs
      • Hard to debug sometimes
      • Baby steps
  19. Terraform: Basics
      • Main commands:
        • terraform init
        • terraform plan
        • terraform apply
        • terraform destroy
        • terraform graph
  20. Terraform: Basics
      • A recipe must have a provider
      • Providers are the IaaS / SaaS players supported by the tool (a multi-provider sketch follows this slide)
        • AWS
        • Azure
        • GCE
        • DNSimple
        • Digital Ocean
      • Visit https://www.terraform.io/docs/providers/index.html
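      A minimal sketch of declaring more than one provider in the same recipe
      set; the region and project values are illustrative, and credentials are
      assumed to come from environment variables or provider-specific files.

      # Sketch only: regions and project id are illustrative values.
      provider "aws" {
        region = "us-east-1"
      }

      provider "azurerm" { }

      provider "google" {
        project = "my-multicloud-project"
        region  = "us-central1"
      }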
  21. Terraform: Basics
      • Terraform relies on state files to be idempotent
      • Each execution maps the currently provisioned resources against the modified content of your recipe
      • You MUST store state files remotely (AWS S3, Azure Storage Account, Google Cloud Storage); see the backend sketch after this slide
      • State files can contain sensitive data
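      A sketch of remote state using the S3 backend (bucket and key names are
      hypothetical); the Azure Storage and Google Cloud Storage backends are
      configured the same way inside the terraform block.

      # Sketch only: bucket and key are hypothetical; keep state out of git.
      terraform {
        backend "s3" {
          bucket = "my-terraform-state"
          key    = "multicloud/network.tfstate"
          region = "us-east-1"
        }
      }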
  22. Terraform: AWS
      • An AWS EC2 Instance requires:
        • Subnet
        • Security Group ID
        • Instance Type
        • Image ID
        • Key Pair
      • No intervention in the panel is needed to enable provisioning
  23. Terraform: AWS

      provider "aws" {
        region = "us-east-1"
      }

      resource "aws_instance" "bastion" {
        instance_type          = "${var.aws_instance["instance_type"]}"
        ami                    = "${var.aws_instance["ami"]}"
        key_name               = "${var.aws_instance["key_name"]}"
        subnet_id              = "${aws_subnet.pub.1.id}"
        vpc_security_group_ids = [ "${aws_security_group.bastion.id}" ]

        tags {
          Name = "SSH Bastion"
        }

        depends_on = [ "aws_security_group.bastion" ]
      }
  24. Terraform: Azure
      • An Azure Virtual Machine is more complex:
        • Storage Account
        • Storage Container (vhds)
        • Network Interface
        • Instance type
        • OS profile
          • SSH key pair (only authorized_keys)
          • Username / Password
      • Requires a list of steps in the panel to enable provisioning
      • https://www.terraform.io/docs/providers/azurerm/index.html
  25. Terraform: Azure

      provider "azurerm" { }

      resource "azurerm_storage_account" "storage" {
        name                = "terraform"
        resource_group_name = "${azurerm_resource_group.rg.name}"
        location            = "${azurerm_resource_group.rg.location}"
        account_type        = "${var.azr_storage_account["account_type"]}"

        tags {
          VirtualNetwork = "${azurerm_virtual_network.vn.tags.Name}"
          CreatedBy      = "terraform"
        }
      }

      resource "azurerm_storage_container" "vhds" {
        name                  = "vhds"
        resource_group_name   = "${azurerm_resource_group.rg.name}"
        storage_account_name  = "${azurerm_storage_account.storage.name}"
        container_access_type = "private"
      }
  26. Terraform: Azure

      resource "azurerm_network_interface" "bastion" {
        name                = "pub_nic_bastion"
        location            = "${var.azr_location}"
        resource_group_name = "${azurerm_resource_group.rg.name}"

        ip_configuration {
          name                          = "pub_nic_bastion"
          subnet_id                     = "${element(azurerm_subnet.pub.*.id, 1)}"
          public_ip_address_id          = "${azurerm_public_ip.bastion.id}"
          private_ip_address_allocation = "dynamic"
        }
      }
  27. Terraform: Azure

      resource "azurerm_virtual_machine" "bastion" {
        name                  = "bastion"
        resource_group_name   = "${azurerm_resource_group.rg.name}"
        location              = "${azurerm_resource_group.rg.location}"
        network_interface_ids = [ "${azurerm_network_interface.bastion.id}" ]
        vm_size               = "${var.azr_virtual_machine["vm_size"]}"

        storage_image_reference {
          publisher = "${var.azr_virtual_machine["storage_image_reference_publisher"]}"
          offer     = "${var.azr_virtual_machine["storage_image_reference_offer"]}"
          sku       = "${var.azr_virtual_machine["storage_image_reference_sku"]}"
          version   = "${var.azr_virtual_machine["storage_image_reference_version"]}"
        }

        storage_os_disk {
          name          = "bastion"
          vhd_uri       = "${azurerm_storage_account.storage.primary_blob_endpoint}${azurerm_storage_container.vhds.name}/bastion.vhd"
          caching       = "ReadWrite"
          create_option = "FromImage"
        }

        os_profile {
          computer_name  = "bastion"
          admin_username = "${var.azr_virtual_machine["admin_username"]}"
          admin_password = "${var.azr_virtual_machine["admin_password"]}"
        }

        os_profile_linux_config {
          disable_password_authentication = false
        }
      }
  28. Terraform: Google
      • A Google Compute Engine VM Instance requires:
        • Subnet
        • Firewall
        • Instance type
        • Image ID
        • Local SSH public key
      • Requires creation of a project to enable provisioning (each project generates a JSON configuration file)
  29. Terraform: Google

      provider "google" {
        credentials = "${file("/root/terraform.json")}"
      }

      resource "google_compute_instance" "bastion" {
        name           = "bastion"
        machine_type   = "${var.gce_instance["machine_type"]}"
        zone           = "${element(var.gce_subnets["pub_zones"], 1)}"
        can_ip_forward = true
        tags           = [ "terraform", "ssh" ]

        disk {
          image = "${var.gce_instance["disk_image"]}"
        }

        network_interface {
          subnetwork = "${element(google_compute_subnetwork.pub.*.name, 1)}"

          access_config {
            nat_ip = "${google_compute_address.bastion.address}"
          }
        }

        metadata {
          ssh-keys = "ubuntu:${file("/root/.ssh/gce.pub")}"
        }
      }
  30. Terraform: Modules
      • You can isolate common code into a folder and use it as a module
      • Modules provide a standard way to provision resources of any type
      • What you need is to isolate the variables of your Terraform recipe so the module can receive them as input variables (example after this slide)
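      A sketch of calling such a module, assuming a hypothetical ./modules/bastion
      folder that wraps the aws_instance recipe shown earlier and receives every
      cloud-specific value as an input variable.

      # Sketch only: the module path and its variable names are hypothetical.
      module "bastion" {
        source            = "./modules/bastion"
        instance_type     = "${var.aws_instance["instance_type"]}"
        ami               = "${var.aws_instance["ami"]}"
        key_name          = "${var.aws_instance["key_name"]}"
        subnet_id         = "${aws_subnet.pub.1.id}"
        security_group_id = "${aws_security_group.bastion.id}"
      }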