this talk

Part 1. D/OX
• what is D/OX?
• development cycle
• how to improve D/OX?
• 4 automations
  ◦ testing
  ◦ scaling
  ◦ deployment ☆
  ◦ logging/monitoring

Part 2. Elixir Deployment
• modern web app infra
• elixir impedance mismatch
• hot code swap or CD?
• our deployment journey
• elixir deployment tao

Part 3. Summary
• conclusion
to improve D/OX
• understand how to deploy elixir apps
• understand how to automate development workflows with elixir
• introducing our trial and error in deploying elixir apps

attention!
- This talk contains little elixir source code.
• Development Experience
  ◦ An indicator of how comfortably the engineer can develop the system
• Operation Experience
  ◦ An indicator of how comfortably the engineer can operate the system
• Good D/OX
  ◦ Development/Operation is fun!
  ◦ Small technical debt
  ◦ Automation
• Bad D/OX
  ◦ Development/Operation is painful!
  ◦ Big/huge technical debt
  ◦ Manual operations
• D/OX deteriorates if left unattended
• tests with CI
  ◦ unit test: exunit
  ◦ lint: credo/dogma
  ◦ static analysis: dialyxir
  ◦ coverage: excoveralls
• load test / stress test
  ◦ set up a “stress” MIX_ENV
  ◦ same level as the dev, test and prod envs
  ◦ stress branch => hard to cherry-pick
  ◦ stress MIX_ENV => easy to manage
• frequent load testing reduces performance problems
  ◦ maintain load test scenarios like unit tests
  ◦ make load tests easy to run
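A dedicated “stress” env can be wired up in `mix.exs` the same way the test env is. This is a minimal sketch; the app name, paths, and dependency versions are illustrative, not from the talk.

```elixir
# mix.exs — sketch of a project with a "stress" MIX_ENV alongside
# dev/test/prod (names and versions here are assumptions).
defmodule MyApp.MixProject do
  use Mix.Project

  def project do
    [
      app: :my_app,
      version: "0.1.0",
      elixir: "~> 1.9",
      # compile load-test scenarios only in the stress env,
      # just as test/support is compiled only in the test env
      elixirc_paths: elixirc_paths(Mix.env()),
      deps: deps()
    ]
  end

  defp elixirc_paths(:stress), do: ["lib", "stress"]
  defp elixirc_paths(:test), do: ["lib", "test/support"]
  defp elixirc_paths(_), do: ["lib"]

  defp deps do
    [
      {:credo, "~> 1.0", only: [:dev, :test], runtime: false},
      {:dialyxir, "~> 1.0", only: [:dev], runtime: false},
      {:excoveralls, "~> 0.10", only: [:test]}
    ]
  end
end
```

Load-test scenarios then run with `MIX_ENV=stress mix run ...`, so they live on the main branch and never need cherry-picking.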
• the server stops
  ◦ the network goes down
  ◦ replace the server if it breaks
    ▪ AWS: EC2 “gacha” (which instance you get is a lottery)
• cloud native
  ◦ devops
  ◦ continuous delivery
  ◦ containers
  ◦ microservices
• running on docker/k8s
• serverless
[diagram: a k8s cluster — a master Node and worker Nodes, each Node running many pods, each pod running containers with an Erlang VM whose schedulers map onto the allocated CPUs]
• Erlang VM schedulers => high CPU efficiency
• k8s allocates CPU & Mem => run many (small CPU/Mem) Pods
• => high infra cost efficiency
• small resources (CPU, Mem)?
  ◦ running on an Instance => large resources
  ◦ running on k8s => small resources
• load test and monitoring
  ◦ DON’T GUESS, MEASURE
• make (load) tests easy to run
• monitor BEAM metrics
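“DON’T GUESS, MEASURE” starts with reading the VM’s own numbers. A minimal sketch of collecting a few BEAM metrics worth exporting to a monitoring system; the metric selection and module name are ours, not from the talk.

```elixir
# Sketch: basic BEAM health metrics via the :erlang module.
defmodule BeamMetrics do
  @doc "Collect a few basic VM health numbers as a map."
  def snapshot do
    %{
      # total bytes currently allocated by the VM
      memory_total: :erlang.memory(:total),
      # schedulers actually running (≈ CPUs the VM can use)
      schedulers_online: :erlang.system_info(:schedulers_online),
      # processes waiting to run — a rising value means CPU saturation
      run_queue: :erlang.statistics(:run_queue),
      # number of live Erlang processes
      process_count: :erlang.system_info(:process_count)
    }
  end
end

IO.inspect(BeamMetrics.snapshot())
```

Graphing `run_queue` and `memory_total` over a load test shows directly whether a small-Pod sizing holds up.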
• phase0 : local PC / iex -S mix
• phase1 : local PC / daemonize, mix release
• phase2 : single server / ssh login, git pull, mix release
• phase3 : single server / deploy with CI
• phase4 : multi server / package on a deploy server, use S3 for delivery
• phase5 : multi server / deploy with autoscaling
• phase6 : multi server / deploy with edeliver/distillery
• phase7 : multi server / deploy with autoscaling
• phase8 : k8s cluster
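For the k8s phase, the app has to become a container image. A minimal multi-stage Dockerfile sketch, assuming `mix release` (Elixir ≥ 1.9); the base image tags and app name `my_app` are illustrative assumptions.

```dockerfile
# Build stage: compile deps and assemble a self-contained release.
FROM elixir:1.14-alpine AS build
WORKDIR /app
ENV MIX_ENV=prod
RUN mix local.hex --force && mix local.rebar --force
COPY mix.exs mix.lock ./
RUN mix deps.get --only prod
COPY config config
COPY lib lib
RUN mix release

# Runtime stage: small image, only the release and its runtime libs.
FROM alpine:3.18 AS runtime
RUN apk add --no-cache libstdc++ ncurses-libs openssl
WORKDIR /app
COPY --from=build /app/_build/prod/rel/my_app ./
CMD ["bin/my_app", "start"]
```

Because the release bundles the Erlang VM, the runtime image needs no Elixir or Erlang installed, which keeps each Pod small in line with the many-small-Pods strategy above.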