The Many Faces of Elixir Deployment

A quick look into deploying your Elixir/Phoenix application on AWS with its EC2 Container Service

Evadne Wu

May 31, 2017
Transcript

  1. The Many Faces of

    Elixir Deployment
    Evadne Wu

    github.com/evadne
    [email protected]
    last updated 31 May 2017

  3. My Background
    Based in London, previously a consultant
    ➤ Started with JS, C & Objective-C
    Took a more operationally involved role in 2013–2014
    ➤ Learned a whole bunch of other stuff
    Started working with Erlang/Elixir in 2015

  4. Caveat: Advice Applicability
    This presentation consists of solely my personal opinion; any
    resemblance of facts should be considered a mere coincidence.
    Depending on your current scale and cultural inclination, they may
    or may not be applicable to your product.

  5. Yet Another Talk on Deployment…
    What year is it?
    Pull in distillery
    RTFM
    Run “mix release”
    Write a Dockerfile
    Release

  7. Details
    There’s lots of prior art on deploying Elixir / Phoenix
    ➤ Dockerisation, OTP Releases, etc. already covered numerous times
    ➤ Distillery is clearly the way forward
    Starting from scratch is a different story
    ➤ Let’s try to build up a decision framework

  8. What I’ll Cover Today
    Theory: design principles and evaluation criteria for your deployment
    Requirements: things that are important to have in your deployment
    Design: how we currently address this issue
    Implementation: SSL/TLS steering, clustering, configuration, etc.
    Demo: (if time permits) a full demonstration of what we’re using
    Then I’ll try to answer any questions

  9. Part 1: Theory

  10. Reality as I Perceive It
    #1: Knowledge Transfer Takes Non-Zero Time
    ➤ Erlang/OTP is a beautiful and pioneering tool
    ➤ Many concepts have found their way outside the Erlang/OTP ecosystem
    ➤ Load balancing, clustering, service discovery, etc
    ➤ Your colleagues probably already know how to use them
    ➤ Key: your colleagues are already knowledgeable; ensure that knowledge stays useful

  11. Reality as I Perceive It
    #2: Opportunity Cost: Platform vs Product Development
    ➤ You wrote it — You maintain it!
    ➤ Any code written by your team will need to be maintained for the entire
    lifetime of its existence by your team, using your team’s budget
    ➤ Consider opportunity cost of custom platform engineering
    ➤ Seek maximum leverage from commercial off-the-shelf solutions
    ➤ Key: Fully exploit well-known solutions created and maintained by others

  13. Reality as I Perceive It
    #3: You’re Probably in Vendor Lock-In Anyway
    ➤ “Portability” is a damned lie; it’s vendor-specific shims all the way down
    ➤ Open Source Languages/Frameworks — Always!
    ➤ Closed Source Third-Party Services — Some? Many? All?
    ➤ Post-porting activities also important: fine-tune, measure, monitor, re-adjust…
    ➤ Might as well design for and exploit each platform to its fullest
    ➤ Key: When possible, spend other people’s R&D money

  14. Guiding Opinions
    Use existing non-Erlang/OTP solutions when appropriate
    ➤ Exploit existing knowledge; avoid forced re-learning
    Use as many pre-built solutions as possible
    ➤ Build upon existing solutions by your community
    Properly integrate with your primary platform
    ➤ An existing and adequate solution is still better than nothing
    Also: try to follow https://12factor.net as closely as possible

  15. Part 2: Requirements

  16. Infrastructure Design Goals
    There are only 2 metrics you must not compromise:
    ➤ Minimum hours spent away from work per day, per team member
    ➤ Number of involuntary interventions per week/month/year
    Do not optimise for “developer happiness” or “developer productivity”
    ➤ These are by-products of a system correctly designed for perpetual operation
    ➤ Focus on operational stability and sustainability instead

  17. 2012: Instagram
    https://www.slideshare.net/iammutex/scaling-instagram
    Scaling Instagram
    AirBnB Tech Talk 2012
    Mike Krieger
    Instagram

  18. “surely we’ll have hired
    someone experienced
    before we actually need
    to shard”
    2 backend engineers
    can scale a system to
    30+ million users

  19. 2014: WhatsApp
    https://github.com/reedr/reedr/blob/master/slides/efsf2014-whatsapp-scaling.pdf
    That's Billion with a B:
    Scaling to the next level at WhatsApp
    Rick Reed
    WhatsApp
    Erlang Factory SF
    March 7, 2014

  20. Hardware Platform
    ~ 550 servers + standby gear
    ~150 chat servers (~1M phones each)
    ~250 mms servers
    2x2690v2 Ivy Bridge 10-core (40 threads total)
    64-512 GB RAM
    SSD (except video)
    Dual-link GigE x 2 (public & private)
    > 11,000 cores
    Numbers
    465M monthly users
    19B messages in & 40B out per day
    600M pics, 200M voice, 100M videos
    147M concurrent connections
    230K peak logins/sec
    342K peak msgs in/sec, 712K out

  21. Requirements
    Predictable: all important bits automated; least surprises
    Resilient: benign issues do not require human intervention
    Efficient: quick to deploy/rollback, quick to start, etc
    Secure: proper role/net segregation, minimised public footprint
    Observable: easy to monitor/intervene (say, with console)

  22. “Predictable”
    Provide assurance that any change will…
    ➤ run to its completion,
    ➤ in a timely manner,
    ➤ make changes atomically only, and
    ➤ provide an actionable, reasonable result,
    ➤ on either release or rollback

  23. “Predictable”
    Scenario 1: typo in production code
    ➤ Good: production push stopped automatically (and serenely)
    ➤ Bad: site brought down, alerts triggered, change rolled back
    Scenario 2: lots of moving parts / dependencies
    ➤ Good: flaky bits designed away, deployment always works
    ➤ Bad: constantly wiggling the snowflakes

  24. “Resilient”
    Common failure modes are accounted for.
    ➤ Within reason: you can probably lose an AZ but not an entire region
    Intermittent errors can be recovered from automatically.
    ➤ Let your Supervision Tree sort out the easy bits.
    ➤ Have your infrastructure pick up the hard bits.
    ➤ NB: they operate at different levels and are actually complementary

  25. “Resilient”
    Scenario 1: Disk filling up?
    ➤ Good: EC2 instance fails health check and is replaced automatically. New
    containers launched on the replacement instance; service continues.
    ➤ Bad: Intermittent 500s start to appear. Malaised server generates errors
    quicker than everything else. Everything is broken. The site is down.

  26. “Resilient”
    Scenario 2: You’ve lost your DB primary?
    ➤ Good: Site goes read-only as read replica gets promoted to primary.
    Important events queued for deferred processing. Impacted Erlang/OTP
    processes restart. You may or may not have lost a couple transactions
    (depending on how your replication is built).
    ➤ Bad: Site goes down. Restoration from yesterday’s backup will take at
    least 4 hours. Third party events dropped. R.T.O. now on the floor.

  27. “Efficient”
    Strong argument: deployment speed is a feature
    ➤ 30 seconds: good; 1 minute: OK; longer than 5 minutes: insufferable
    Helps reduce ceremonial role of deployments
    ➤ Much lower cost of error correction if deployments are fast
    Also helps you fulfil the inevitable “build a new stack” requests
    ➤ Staging / Test / Support / One-Off Odds & Ends / Big Migration / DR…
    ➤ You will probably get asked to “run a copy of PROD” with miracles expected

  28. “Efficient”
    Scenario: “We’d like to test this big feature in a safe manner”
    Good: New stack created and traffic split at DNS / LB level.
    Everybody carries on working/testing. Eventually, rollout completed.
    Bad: Big-bang rollout which inevitably fails. After much gnashing of
    teeth and lots of finger-pointing, the operators get blamed (!)
    Probably OK: use feature flags…

  29. “Secure”
    Put the RDBMS, Web Servers and Bastion Servers in separate places
    ➤ Utilise VPC capabilities to their fullest
    ➤ Additional layers around your systems
    Separate development/deployment/infrastructure roles
    ➤ Resource access/creation/mutation limited with dedicated roles
    ➤ Also allows you to deploy customer-hosted services as a vendor

  30. “Secure”
    Scenario 1: CEO signed a managed services deal with TVBCOA…
    ➤ Good: “Run this template, then give us keys from its output, which are
    limited to deployment/maintenance for this service only.”
    ➤ Bad: “Yeah we need root access to these servers just to deploy…”
    Scenario 2: Lots of frenemies in one room
    ➤ Good: each division gets their own VPC
    ➤ Bad: everything can see everything else…

  31. “Observable”
    Retain ability to kick the tires from time to time
    ➤ Have a mental picture of what “normal” looks like
    ➤ Different tools for Application and Infrastructure level needs
    Maintain ability to intervene quickly if required
    ➤ Little can be done by pure operators apart from scaling horizontally
    ➤ Developer access is still crucial as systems mature

  32. “Observable”
    Scenario: Service is “wonky”, no further information available
    ➤ Good: Console access is available. Node connected to, root cause
    identified, and ad-hoc patch seems to alleviate the problem. New release
    created and rolled out.
    ➤ Bad: Insufficient access, so a new release was needed, just to add
    logging statements…

  33. Part 3: Design

  34. Our Current Setup — Networking
    One custom (non-default) VPC for Production, One for Development
    ➤ Some older AWS EC2 customers actually do not have a “Default VPC”!
    ➤ Even if they do, it would be poor form to put everything in there
    Three Subnets per Availability Zone
    ➤ One each for Public, Private and Data services: Public → Private, Private → Data
    ➤ For Internet Access (NTP, etc): Private → Public, Data → Public
    ➤ You can put in an S3 Gateway if you wish: cheaper S3 access!

  35. Our Current Setup — Application
    Fully utilise the AWS EC2 Container Service (ECS)
    ➤ AWS-managed Kubernetes would be great.
    ➤ ECS hosts spread around all available AZs
    Fully utilise Docker for production builds
    ➤ We’re all using Macs anyway
    ➤ Development on macOS, Release/Production on Linux
    ➤ Helps catch system-specific issues

  36. Our Current Setup — Management
    AWS CloudFormation for nearly everything
    ➤ This is much simpler than writing an Infrastructure Deployment Guide
    ➤ 100% repeatable, no churn, no faffing about
    ➤ Checked into Git and Version Controlled
    Locally run Bash scripts for everything else
    ➤ The choice is simple: either run Bash locally or trust/run Lambda Functions remotely. I’d
    rather do it locally
    ➤ Checked into Git and Version Controlled

  37. Our Scripts
    Create Stack in CloudFormation
    Update Resources in CloudFormation
    Deploy a Release
    Retrieve Platform Logs
    Watch Application Logs (coming soon)
    Attach Console (coming soon)

  38. Script: Create Stack
    Grab Input and Parameters
    ➤ Establish and pin dependency against upstream stacks
    ➤ Pinned resources cannot change upstream
    Run Stack (as Administrator)
    ➤ You can make more Administrator accounts using IAM
    ➤ Meta: you could also build a self-service administrator account in IAM, which
    can only update the stack which created it.

  39. Script: Update Resources
    Validate Environment
    Open a PostgreSQL connection to RDS
    Create Database with the right encoding/collation
    Create Role with the right permissions
    Run Seed Script/Migrations
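    A hypothetical condensed version of those steps, driven by psql over the
    SSH tunnel described later; database, role, password and collation values
    are placeholders, not the real script:

    psql "$ADMIN_DATABASE_URL" <<'SQL'
    CREATE ROLE app_web LOGIN PASSWORD 'changeme';
    CREATE DATABASE app_prod
      OWNER app_web
      ENCODING 'UTF8'
      LC_COLLATE 'en_US.UTF-8'
      LC_CTYPE 'en_US.UTF-8'
      TEMPLATE template0;
    SQL
    mix ecto.migrate && mix run priv/repo/seeds.exs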

  40. Script: Deploy Release
    Validate Environment
    Build Containers with Docker Compose
    ➤ The same images previously used for validation are reused
    Push Images to ECS Container Repository
    Revise ECS Task Definition
    Revise ECS Service Definition
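    A hypothetical sketch of the glue, using the stock AWS CLI; repository,
    cluster and service names are placeholders, and the JSON templates follow
    on the next slides:

    docker-compose build web
    docker tag org/app:web "$ECR_REPO:$GIT_SHA"
    eval "$(aws ecr get-login --region eu-west-1)"
    docker push "$ECR_REPO:$GIT_SHA"
    # Register a new task definition revision, then point the service at it.
    task_arn=$(aws ecs register-task-definition \
      --cli-input-json file://task-definition.json \
      --query 'taskDefinition.taskDefinitionArn' --output text)
    aws ecs update-service --cluster "$ECS_CLUSTER" \
      --service "$ECS_SERVICE" --task-definition "$task_arn"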

  41. {
    "family": "default",
    "containerDefinitions": [
    {
    "name": "web",
    "image": "ECS_TASK_IMAGE_WEB",
    "memoryReservation": 896,
    "memory": 1792,
    "essential": true,
    "privileged": false,
    "portMappings": [
    {
    "containerPort": 5000
    }
    ],
    "environment": [
    {"name": "NODE_COOKIE", "value": "ENVIRONMENT_NODE_COOKIE"},
    {"name": "HOST", "value": "ENVIRONMENT_HOST"},
    {"name": "PORT", "value": “5000”},

    ],
    "logConfiguration": {
    "logDriver": "awslogs",
    "options": {
    "awslogs-group": "LOG_CONFIGURATION_GROUP",
    "awslogs-region": "LOG_CONFIGURATION_REGION",
    "awslogs-stream-prefix": "LOG_CONFIGURATION_STREAM_PREFIX"
    }
    }
    }
    ]
    }

  42. {
    "cluster": "ECS_CLUSTER_NAME",
    "serviceName": "ECS_SERVICE_NAME",
    "taskDefinition": "ECS_TASK_DEFINITION",
    "loadBalancers": [
    {
    "targetGroupArn": "ELB_TARGET_GROUP_ARN",
    "containerName": "web",
    "containerPort": 5000
    }
    ],
    "desiredCount": 2,
    "clientToken": "NONCE",
    "role": "ECS_SERVICE_ROLE_ARN",
    "deploymentConfiguration": {
    "maximumPercent": 150,
    "minimumHealthyPercent": 50
    }
    }

  43. {
    "cluster": "ECS_CLUSTER_NAME",
    "service": "ECS_SERVICE_NAME",
    "desiredCount": 2,
    "taskDefinition": "ECS_TASK_DEFINITION",
    "deploymentConfiguration": {
    "maximumPercent": 150,
    "minimumHealthyPercent": 50
    }
    }

  44. Script: Retrieve Platform Logs
    Validate Environment
    Query CFN/EC2/ECS for environmental particulars
    Retrieve latest events from ECS
    ➤ Has the cluster entered a stable state?
    Retrieve EC2 instances and their particulars
    Print metrics of particular interest (RAM/CPU utilisation, etc)
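    The underlying AWS CLI calls might look roughly like this (not the actual
    script; cluster and service names are placeholders):

    aws ecs describe-services --cluster "$ECS_CLUSTER" \
      --services "$ECS_SERVICE" --query 'services[0].events[:10]'
    aws ecs list-container-instances --cluster "$ECS_CLUSTER"
    aws cloudwatch get-metric-statistics --namespace AWS/ECS \
      --metric-name CPUUtilization \
      --dimensions Name=ClusterName,Value="$ECS_CLUSTER" \
      --statistics Average --period 300 \
      --start-time "$(date -u -v-1H +%FT%TZ)" \
      --end-time "$(date -u +%FT%TZ)"   # macOS date syntax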

  45. Future Script: Application Logs
    In the meantime, we use the awslogs tool to print CloudWatch Logs
    ➤ $ awslogs get log-group ALL --watch --start='15m ago'
    ➤ jorgebastida/awslogs

  46. Future Script: Attach Console
    In the meantime, run docker exec…
    ➤ $ docker ps # find a container
    ➤ $ docker exec -it (container) iex -S mix
    Also for later consideration: attach a local Erlang node to VPC infra
    ➤ This is a straightforward matter of lining the ports up
    ➤ You could even do an ad-hoc task if you wish, but it’d be slower

  47. Part 4: Implementation

  48. Docker Everywhere
    Docker is basically a way to quickly provision servers
    ➤ Essential for automatic failover
    ➤ You can attach volumes for stuff you wish to keep
    Alternative: cloud-init, custom AMIs, Ansible / Puppet
    ➤ Either slower (re-provisioning everything dynamically takes minutes to
    hours), or less efficient (entire AMIs need to be rebuilt: is your build
    infrastructure also reproducible?)

  49. version: '3.2'
    services:
      web:
        build:
          context: .
          dockerfile: infra/docker-web/Dockerfile
        image: org/app:web
        environment:
          - DATABASE_URL=postgres://u:p@postgres:5432/d
          - …
          - HOST
          - PORT=5000
          - NODE_NAME
          - NODE_COOKIE=app-docker-compose
        links:
          - postgres
      postgres:
        build:
          context: .
          dockerfile: ./infra/docker-postgres/Dockerfile
        image: org/app:postgres
        environment:
          - POSTGRES_DB=d
          - POSTGRES_USER=u
          - POSTGRES_PASSWORD=p
        ports:
          - "5432:5432"
      nginx:
        image: quay.io/aptible/nginx
        environment:
          - UPSTREAM_SERVERS=web:5000
          - FORCE_SSL=true
        ports:
          - "5000:443"
        links:
          - web

  50. FROM elixir:1.4.4
    ENV MIX_ENV=prod
    RUN mix local.hex --force && mix local.rebar --force

    # Copy the build manifests first, so the dependency layers stay
    # cached until mix.exs / mix.lock actually change.
    COPY rel/config.exs rel/vm.args /app/rel/
    COPY mix.exs /app/mix.exs
    COPY mix.lock /app/mix.lock
    COPY config/config.exs /app/config/config.exs
    COPY config/prod.exs /app/config/prod.exs
    RUN cd /app && mix deps.get && mix deps.compile

    # The application code changes most often, so it comes last.
    COPY app /app/app/
    COPY lib /app/lib
    COPY priv /app/priv
    RUN cd /app && mix release --env=prod

    COPY infra/docker-web/start.sh /app/
    WORKDIR /app
    CMD ./start.sh

  51. Docker Everywhere: Notes
    If you attach a volume, you could get the artefacts out.
    ➤ This gets you the underlying Erlang/OTP release
    ➤ You could then wrap it in another delivery mechanism
    ➤ Erlang on Xen, perhaps?

  52. Ecto + SSL + Postgres
    Ecto allows you to enable encryption
    ➤ Heroku enforces it
    ➤ Other security-minded people like it
    ➤ You may have it turned on right now
    Now how do you test it…
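    For reference, enabling it is a one-line change to the Repo configuration
    (a sketch; app and repo names are illustrative):

    # config/prod.exs
    config :app, App.Repo,
      url: {:system, "DATABASE_URL"},
      ssl: true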

  54. FROM postgres:9.6.3-alpine
    COPY infra/docker-postgres/initdb /docker-entrypoint-initdb.d
    COPY priv/repo/structure.sql /docker-entrypoint-initdb.d/structure.sql

  55. ** intense openssl action intentionally left blank **
    you get a self-signed thing
    and you commit it
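    The elided step looks roughly like this, following the PostgreSQL
    documentation cited on the next slide (the CN is a placeholder):

    $ openssl req -new -x509 -days 365 -nodes -text \
        -out server.crt -keyout server.key -subj "/CN=postgres"
    $ chmod og-rwx server.key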

  56. #!/bin/bash
    set -e
    #
    # Copy the key pair to $PGDATA. The key pair has been
    # generated manually beforehand, as the container running
    # PostgreSQL does not have OpenSSL exposed anyway.
    #
    # https://www.postgresql.org/docs/9.1/static/ssl-tcp.html
    # See: 17.9.3. Creating a Self-signed Certificate
    #
    cp /docker-entrypoint-initdb.d/server.{crt,key} "$PGDATA"
    chown postgres:postgres "$PGDATA"/server.{crt,key}
    chmod 0600 "$PGDATA"/server.key
    #
    # Given that this is a development container,
    # we do not wish to play with PostgreSQL configuration too much
    # therefore a simple line appended to the end of the configuration
    # file will suffice.
    #
    echo "ssl = on" >> "$PGDATA/postgresql.conf"

  57. SSL/TLS Serving + Steering
    When a customer accesses your HTTP endpoint, redirect to HTTPS
    ➤ Much better for applications: better than throwing an error
    ➤ Not really needed for programmatic access: just fail straight away
    X-Forwarded-Proto header
    ➤ Emitted by Heroku and AWS Elastic Load Balancer
    ➤ Supported by Plug.SSL and therefore Phoenix’s Endpoint
    ➤ config :app, Endpoint, force_ssl: [rewrite_on: [:x_forwarded_proto]]

  58. Be Mindful of ELB Health Checks
    With TLS offloaded to the Elastic Load Balancer, all ELB / App
    interaction will be conducted in HTTP, including health checks.
    With HTTPS enforcement, health checks by default get 301. Adjust
    your Matcher accordingly to avoid knocking your site offline.

  59. "DefaultTargetGroup": {
    "Type": "AWS::ElasticLoadBalancingV2::TargetGroup",
    "Properties": {
    "Name": {"Fn::Sub": "${AWS::StackName}-default"},
    "VpcId": {
    "Fn::If": [
    "IsVPCIdSpecified",
    {"Ref": "VPCId"},
    {"Fn::ImportValue": {"Fn::Sub": "${VPCStackName}:VPC"}}
    ]
    },
    "Port": "80",
    "Protocol": "HTTP",
    "Matcher": {
    "HttpCode": "200,301,302,307"
    },
    "TargetGroupAttributes": [
    {"Key": "deregistration_delay.timeout_seconds", "Value": 30},
    {"Key": "stickiness.enabled", "Value": false}
    ]
    }
    }

  60. Clustering
    De facto entry point: libcluster
    ➤ https://github.com/bitwalker/libcluster
    ➤ Out-of-the-box support for:
    ➤ EPMD (with a predefined list of node@host names)
    ➤ Gossip (UDP multicast)
    ➤ Kubernetes
    ➤ You can provide a custom strategy
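    A minimal topology configuration might look like this (a sketch; the
    topology name and strategy choice are illustrative):

    config :libcluster,
      topologies: [
        app: [
          strategy: Cluster.Strategy.Gossip,
          config: [port: 45892]
        ]
      ]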

  61. Clustering: Gossip in VPC?
    Multicast is not really supported in AWS
    ➤ https://aws.amazon.com/articles/6234671078671125
    ➤ Establish VPN tunnels between each host, then fake it…
    You could emulate UDP multicast in AWS if you really want to
    ➤ A lot of work, though!
    ➤ Consider what the actual reward would be

  62. 2013: IP Multicast on EC2
    https://www.slideshare.net/kentayasukawa/ip-multicast-on-ec2
    ➤ Kenta Yasukawa, Cofounder & CTO, SORACOM, Inc.

  64. Clustering: Roll Your Own?
    Run a state service behind a load-balancer / proxy
    ➤ Perhaps etcd or Riak, but you could use Redis/Postgres if you wish
    Each Erlang/OTP node puts in an expiring “heartbeat”
    ➤ {nodename, host, port, cookie, memo}, TTL = 30 seconds
    The nodes then try to connect to each other
    ➤ Custom libcluster strategy needed (idea sketched below)
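    As a standalone illustration of the heartbeat idea (not a full libcluster
    strategy), assuming Redis via the Redix library; keys and intervals are
    arbitrary:

    defmodule App.ClusterHeartbeat do
      use GenServer

      @ttl 30          # entries expire after 30 seconds
      @interval 10_000 # re-announce every 10 seconds

      def start_link, do: GenServer.start_link(__MODULE__, [], name: __MODULE__)

      def init(_) do
        {:ok, conn} = Redix.start_link()
        send(self(), :beat)
        {:ok, conn}
      end

      def handle_info(:beat, conn) do
        # Announce our own presence with a TTL…
        Redix.command!(conn, ["SET", "cluster:#{Node.self()}", "up", "EX", "#{@ttl}"])

        # …then try to connect to every node that is still announcing.
        {:ok, keys} = Redix.command(conn, ["KEYS", "cluster:*"])
        for "cluster:" <> name <- keys, name != to_string(Node.self()) do
          Node.connect(String.to_atom(name))
        end

        Process.send_after(self(), :beat, @interval)
        {:noreply, conn}
      end
    end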

  65. Clustering: Work Partitioning
    Once you have a cluster, you may wish to have work partitioned
    among your nodes.
    ➤ You can utilise Riak Core once you get a cluster going
    ➤ Alternatively: you can take a distributed lock and allocate from there
    ➤ Many other ways to distribute work — find the best for your application; one variant is sketched below
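    For instance, the distributed-lock variant can be as small as
    :global.trans/2 (claim_partition/1 is a hypothetical helper):

    # Only one node in the cluster runs the function at any given time.
    :global.trans({:work_allocator, self()}, fn ->
      claim_partition(Node.self())
    end)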

  66. Node Name / Node Host in ECS
    We ended up customising vm.args
    ➤ Pass a “placeholder” Erlang cookie, so we can deploy many revisions of
    something together without them talking to each other
    ➤ Actual Erlang Cookie kept in an AWS SSM Parameter
    Also customised the startup script
    ➤ Dynamically generate the Node Name to avoid clashes

  67. # rel/config.exs
    environment :prod do
      set include_erts: false
      set include_src: false
      set cookie: :placeholder
      set vm_args: "./rel/vm.args"
    end

    # rel/vm.args
    ## Name of the node
    -name ${NODE_NAME}

    ## Cookie for distributed erlang
    -setcookie ${NODE_COOKIE}

  68. # actual startup script run by ECS
    #!/usr/bin/env sh
    if [ -z "$NODE_NAME" ]; then
      nonce=$(cat /dev/urandom | \
        LC_ALL=C tr -dc 'a-zA-Z0-9' | \
        fold -w 32 | head -n 1)
      hostname=$(hostname -I)
      # i.e. <random nonce>@<container IP>, so revisions never clash
      export NODE_NAME="$nonce@$hostname"
    fi
    if [ -z "$NODE_COOKIE" ]; then
      export NODE_COOKIE="app-docker"
    fi
    export REPLACE_OS_VARS=true
    cd /app && ./releases/app/bin/app foreground

  69. Secure SSH Tunneling + Proxying
    Actually open a PostgreSQL connection to RDS
    ➤ ssh -L local_port:remote_host:remote_port
    ➤ Tunnel from Laptop to Bastion
    ➤ Tunnel from Bastion to any ECS Host
    ➤ You must have at least one host in there anyway
    ➤ Result: localhost:5436 maps to RDS:5432 — success!
    ➤ Postgres should be configured to use SSL only
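    Put together, one possible shape of the tunnel (hostnames are placeholders):

    # Forward local port 5436 through the Bastion to RDS port 5432
    $ ssh -N -L 5436:mydb.xxxx.eu-west-1.rds.amazonaws.com:5432 \
        ec2-user@bastion.example.com
    # Then, locally:
    $ psql "postgres://user:pass@localhost:5436/app_prod?sslmode=require"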

  70. Seeding Data
    This should be part of infrastructure provisioning
    ➤ Goal: once the stack is handed off to the development team, it is already
    running the application in a fully functional fashion off the master branch
    You could tunnel to RDS and build up the state yourself
    ➤ Good way to consistently exercise the “Seed” file
    ➤ Good way to expose incorrect migrations too

  71. Migrations
    There’s a script available in the Distillery documentation
    ➤ https://hexdocs.pm/distillery/running-migrations.html#content
    It is a good starting point (condensed below)
    ➤ However, your migrations should always run with only the Repo running
    ➤ This means no application dependencies
    ➤ iex -S mix run --no-start
    ➤ Treat Ecto migrations as a stable Elixir-to-SQL transformation
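    A condensed sketch of that approach (module and app names are
    illustrative; see the Distillery docs above for the full version):

    defmodule App.ReleaseTasks do
      def migrate do
        # Load the application without starting its supervision tree…
        Application.load(:app)
        {:ok, _} = Application.ensure_all_started(:postgrex)
        {:ok, _} = Application.ensure_all_started(:ecto)

        # …then start only the Repo and hand over to the migrator.
        {:ok, _} = App.Repo.start_link(pool_size: 1)
        path = Application.app_dir(:app, "priv/repo/migrations")
        Ecto.Migrator.run(App.Repo, path, :up, all: true)
      end
    end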

  72. Migrations: Sanity Check?
    If you run this on your application right now, will they work?
    ➤ mix ecto.drop
    ➤ mix ecto.create
    ➤ mix ecto.migrate
    ➤ mix run priv/repo/seeds.exs
    ➤ mix ecto.rollback --all
    ➤ mix ecto.migrate
    ➤ mix run priv/repo/seeds.exs

  73. Interactive Elixir: Pro Tip
    ➤ You now have a supervision tree!
    ➤ However, you may not want to run everything when starting iex.
    ➤ It wouldn’t be good if a remote node started sharing production workloads
    ➤ You can check :init.get_arguments()[:user]
    ➤ Does it include 'Elixir.IEx.CLI'?
    ➤ Optionally skip certain bits of your supervision tree (sketched below)
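    A sketch of that check inside a standard Application module
    (child and worker names are illustrative):

    defmodule App.Application do
      use Application
      import Supervisor.Spec

      def start(_type, _args) do
        base = [supervisor(App.Repo, []), supervisor(App.Endpoint, [])]

        # Skip production workloads when this node was booted for IEx:
        # :init.get_arguments()[:user] includes 'Elixir.IEx.CLI' in that case.
        workers = if iex_running?(), do: [], else: [worker(App.Worker, [])]

        Supervisor.start_link(base ++ workers, strategy: :one_for_one)
      end

      defp iex_running? do
        'Elixir.IEx.CLI' in List.wrap(:init.get_arguments()[:user])
      end
    end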

  74. Observe Nodes
    There are some tools available
    ➤ http://www.brendangregg.com/linuxperf.html
    ➤ https://github.com/utkarshkukreti/ex_top
    ➤ http://zhongwencool.github.io/observer_cli/
    ➤ https://github.com/shinyscorpion/wobserver
    ➤ https://github.com/IanLuites/wobserver-elixirconf-2017

  75. Part 5: Demo

  77. Thank You!
