Docker for Ruby Developers
Frank Macreery
August 11, 2015
Originally presented at NYC.rb.
Transcript
Docker for Ruby Developers
NYC.rb, August 2015
Frank Macreery, CTO, Aptible
@fancyremarker
https://speakerdeck.com/fancyremarker
Docker: What's it all about?
Docker Containers: The new virtual machines?
Then: Virtual Machines (https://www.docker.com/whatisdocker)
Now: Containers (https://www.docker.com/whatisdocker)
Getting Started with Docker
boot2docker
Kitematic
Getting Started with Docker
Install boot2docker via http://boot2docker.io/#installation
echo 'eval "$(boot2docker shellinit)"' >> ~/.bashrc
Getting Started with Docker
Alternatively (if you have Homebrew and VirtualBox installed)…
brew install boot2docker
brew install docker
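Either way, you would typically start the boot2docker VM and confirm the client can reach the daemon before going further (a quick sketch; exact steps vary by boot2docker version):
boot2docker init                  # create the boot2docker VM (first run only)
boot2docker up                    # start the VM
eval "$(boot2docker shellinit)"   # export DOCKER_HOST and friends for this shell
docker info                       # should print details about the Docker daemon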
Getting Started with Docker: Images and containers
docker pull quay.io/aptible/nginx:latest
quay.io/aptible/nginx:latest is a reference to a Docker image
Image "repositories" can have many "tags"
docker pull pulls an image from a Docker "registry"
A single Docker image consists of many "layers"
docker images
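docker images lists the images cached locally; to see the layers behind a single image, docker history is handy (illustrative, not from the original slides):
docker images                                # list local images (repository, tag, image ID, size)
docker history quay.io/aptible/nginx:latest  # show each layer that makes up the image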
docker run -d -p 80:80 -p 443:443 \
  quay.io/aptible/nginx:latest
Launches a new container from an existing image
-p forwards container ports to the host (host port:container port)
-d runs the container in the background
docker ps
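docker ps lists running containers; a few more commands round out the basic lifecycle (a minimal sketch, not from the original slides; <container> is the ID or name shown by docker ps):
docker logs <container>    # view a container's stdout/stderr
docker stop <container>    # stop a running container
docker rm <container>      # remove a stopped container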
Simplifying SOA: Service-oriented architectures without the complexity
Dev/prod parity?
What makes dev/prod parity so hard?
1 production deployment, many development/staging environments
SOA simplifies each service's responsibilities, but often at the cost of additional deployment complexity
The more services you have, the harder it is to achieve dev/prod parity
The more engineers you have, the harder it is to standardize development environments
README != Automation
Dev/prod parity via Docker
Define services in terms of Docker images
# docker-compose.yml for Aptible Auth/API
auth:
  build: auth.aptible.com/
  ports:
    - "4000:4000"
  links:
    - postgresql
  environment:
    RAILS_ENV: development
    DATABASE_URL: postgresql://postgresql/aptible_auth_development

api:
  build: api.aptible.com/
  ports:
    - "4001:4001"
  links:
    - redis
    - postgresql
  environment:
    RAILS_ENV: development
    DATABASE_URL: postgresql://postgresql/aptible_api_development
    REDIS_URL: redis://redis
    APTIBLE_AUTH_ROOT_URL: http://docker:4000

redis:
  image: quay.io/aptible/redis:latest
  ports:
    - "6379:6379"

postgresql:
  image: quay.io/aptible/postgresql:aptible-seeds
  ports:
    - "5432:5432"
Use the same service/image configuration in production as in development (Docker Compose, Swarm, Kubernetes…)
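With the docker-compose.yml above, the whole stack comes up with one command (a sketch assuming Docker Compose is installed; service names match the file above):
docker-compose up -d       # build/pull images and start auth, api, redis, postgresql in the background
docker-compose ps          # check the state of each service
docker-compose logs api    # view the api service's logs
docker-compose stop        # stop everything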
Containerized SSL: Infrastructure management made easy
[Diagram: an Elastic Load Balancer (ELB) in front of two EC2 Instances, each running NGiNX; traffic arrives as TCP/HTTPS and is proxied on as HTTP]
How to configure NGiNX with multiple dynamic upstreams? Chef? Salt? Ansible?
ENV configuration: $UPSTREAM_SERVERS
docker run -d -p 80:80 -p 443:443 \
  -e UPSTREAM_SERVERS=docker:4000,docker:4001 \
  quay.io/aptible/nginx:latest
ENV configuration: Makes testing easier
https://github.com/sstephenson/bats
# Dockerfile
# Install and configure NGiNX...
# ...
ADD test /tmp/test
RUN bats /tmp/test

https://github.com/aptible/docker-nginx
Image: quay.io/aptible/nginx
#!/usr/bin/env bats
# /tmp/test/nginx.bats

@test "It should accept a list of UPSTREAM_SERVERS" {
  simulate_upstream
  UPSTREAM_SERVERS=localhost:4000 wait_for_nginx
  run curl localhost 2>/dev/null
  [[ "$output" =~ "Hello World!" ]]
}

simulate_upstream() {
  nc -l -p 4000 127.0.0.1 < upstream-response.txt
}
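Because the Dockerfile runs bats as a build step, a failing test fails the whole image build; running the suite locally is just a docker build (sketch; the image tag is arbitrary):
docker build -t nginx-test .   # RUN bats /tmp/test aborts the build if any @test fails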
ENV configuration: Abstracts implementation details (could be NGiNX, HAProxy, …)
ENV configuration: Simplifies configuration management (the central store doesn't need to know parameters in advance)
ENV configuration: $UPSTREAM_SERVERS, $FORCE_SSL, $DISABLE_WEAK_CIPHER_SUITES, (…)
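Each of these is just another -e flag on docker run; the values below are illustrative assumptions, so check the image's README for the exact semantics:
# FORCE_SSL / DISABLE_WEAK_CIPHER_SUITES values shown here are illustrative
docker run -d -p 80:80 -p 443:443 \
  -e UPSTREAM_SERVERS=docker:4000,docker:4001 \
  -e FORCE_SSL=true \
  -e DISABLE_WEAK_CIPHER_SUITES=true \
  quay.io/aptible/nginx:latest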
Vulnerability Response: Fix, test, docker push, restart
Heartbleed, POODLEbleed, xBleed???
Integration Tests: Document and test every vulnerability response
#!/usr/bin/env bats
# /tmp/test/nginx.bats

@test "It should pass an external Heartbleed test" {
  install_heartbleed
  wait_for_nginx
  Heartbleed localhost:443
  uninstall_heartbleed
}

install_heartbleed() {
  export GOPATH=/tmp/gocode
  export PATH=${PATH}:/usr/local/go/bin:${GOPATH}/bin
  go get github.com/FiloSottile/Heartbleed
  go install github.com/FiloSottile/Heartbleed
}
Integration tests happen during each image build
Images are built automatically via Quay Build Triggers
Build status is easy to verify at a glance
Quay Time Machine lets us roll back an image to any previous state
Database Deployment: Standardizing an "API" across databases
New databases mean new dependencies
How to document setup steps for engineers?
How to deploy in production?
How to perform common admin tasks? Backups? Replication? CLI access? Read-only mode?
Wrap Databases in a Uniform API: Standardizing an "API" across databases
#!/bin/bash
# run-database.sh
command="/usr/lib/postgresql/$PG_VERSION/bin/postgres -D "$DATA_DIRECTORY" -c config_file=/etc/postgresql/$PG_VERSION/main/postgresql.conf"

if [[ "$1" == "--initialize" ]]; then
  chown -R postgres:postgres "$DATA_DIRECTORY"
  su postgres <<COMMANDS
    /usr/lib/postgresql/$PG_VERSION/bin/initdb -D "$DATA_DIRECTORY"
    /etc/init.d/postgresql start
    psql --command "CREATE USER ${USERNAME:-aptible} WITH SUPERUSER PASSWORD '$PASSPHRASE'"
    psql --command "CREATE DATABASE ${DATABASE:-db}"
    /etc/init.d/postgresql stop
COMMANDS
elif [[ "$1" == "--client" ]]; then
  [ -z "$2" ] && echo "docker run -it aptible/postgresql --client postgresql://..." && exit
  psql "$2"
elif [[ "$1" == "--dump" ]]; then
  [ -z "$2" ] && echo "docker run aptible/postgresql --dump postgresql://... > dump.psql" && exit
  pg_dump "$2"
elif [[ "$1" == "--restore" ]]; then
  [ -z "$2" ] && echo "docker run -i aptible/postgresql --restore postgresql://... < dump.psql" && exit
  psql "$2"
--initialize: Initialize data directory
--client: Start a CLI client
--dump: Dump database to STDOUT
--restore: Restore from dump
--readonly: Start database in RO mode
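Put together, the same image handles the whole lifecycle; the invocations below mirror the script's own usage hints, with the connection URL left as a placeholder:
docker run -it aptible/postgresql --client "postgresql://..."              # interactive psql session
docker run aptible/postgresql --dump "postgresql://..." > dump.psql        # dump to STDOUT
docker run -i aptible/postgresql --restore "postgresql://..." < dump.psql  # restore from STDIN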
db-launch () {
  container=$(head -c 32 /dev/urandom | md5);
  passphrase=${PASSPHRASE:-foobar};
  image="${@: -1}";

  docker create --name $container $image
  docker run --volumes-from $container \
    -e USERNAME=aptible -e PASSPHRASE=$passphrase \
    -e DB=db $image --initialize
  docker run --volumes-from $container $@
}
http://bit.ly/aptible-dblaunch
1. Create a "volume container":
docker create --name $container $image
2. Initialize the database data volume:
docker run --volumes-from $container \
  -e USERNAME=aptible -e PASSPHRASE=$passphrase \
  -e DB=db $image --initialize
3. Run the database:
docker run --volumes-from $container $@
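An invocation might look like this (illustrative: the PASSPHRASE value, the port mapping, and the image tag are assumptions; the image is passed as the last argument, as the function expects):
PASSPHRASE=secret db-launch -d -p 5432:5432 quay.io/aptible/postgresql:latest
docker ps   # the new postgresql container should be listed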
Thank you
@fancyremarker
[email protected]
https://speakerdeck.com/fancyremarker