Docker for Ruby Developers
Frank Macreery
August 11, 2015
Originally presented at NYC.rb.
Transcript
Docker for Ruby Developers
NYC.rb, August 2015
Frank Macreery, CTO, Aptible
@fancyremarker
https://speakerdeck.com/fancyremarker
Docker: What’s it all about?
Docker Containers: The new virtual machines?
Then: Virtual Machines (diagram: https://www.docker.com/whatisdocker)
Now: Containers (diagram: https://www.docker.com/whatisdocker)
Getting Started with Docker
boot2docker
Kitematic
Getting Started with Docker
Install boot2docker via http://boot2docker.io/#installation

echo 'eval "$(boot2docker shellinit)"' >> ~/.bashrc
Getting Started with Docker
Alternatively (if you have Homebrew and VirtualBox installed)…

brew install boot2docker
brew install docker
Getting Started with Docker: images and containers
docker pull quay.io/aptible/nginx:latest

quay.io/aptible/nginx:latest is a reference to a Docker image.
Image "repositories" can have many "tags".
docker pull pulls an image from a Docker "registry".
A single Docker image consists of many "layers"
docker images
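As an illustrative aside (standard Docker CLI commands, not from the deck): docker images lists what you have locally, and docker history shows the layers that make up a pulled image.

# Illustrative only; output depends on what you have pulled locally.
docker images quay.io/aptible/nginx          # local tags for this repository
docker history quay.io/aptible/nginx:latest  # one row per image layer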
docker run -d -p 80:80 -p 443:443 \
  quay.io/aptible/nginx:latest

Launches a new container from an existing image.
-p forwards container ports to the host (host port:container port).
-d runs the container in the background.
docker ps
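A few follow-up commands you might run against the container started above (standard Docker CLI; the container ID placeholder is whatever docker ps reports):

docker ps                      # note the container ID or name
docker logs <container-id>     # view the NGiNX container's output
docker stop <container-id>     # stop it
docker rm <container-id>       # remove it once stopped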
Simplifying SOA: service-oriented architectures without the complexity
Dev/prod parity?
What makes dev/prod parity so hard?
1 production deployment, many development/staging environments.
SOA simplifies each service’s responsibilities, but often at the cost of additional deployment complexity.
The more services you have, the harder it is to achieve dev/prod parity.
The more engineers you have, the harder it is to standardize development environments.
README != Automation
Dev/prod parity via Docker
Define services in terms of Docker images.
# docker-compose.yml for Aptible Auth/API
auth:
  build: auth.aptible.com/
  ports:
    - "4000:4000"
  links:
    - postgresql
  environment:
    RAILS_ENV: development
    DATABASE_URL: postgresql://postgresql/aptible_auth_development
api:
  build: api.aptible.com/
  ports:
    - "4001:4001"
  links:
    - redis
    - postgresql
  environment:
    RAILS_ENV: development
    DATABASE_URL: postgresql://postgresql/aptible_api_development
    REDIS_URL: redis://redis
    APTIBLE_AUTH_ROOT_URL: http://docker:4000
redis:
  image: quay.io/aptible/redis:latest
  ports:
    - "6379:6379"
postgresql:
  image: quay.io/aptible/postgresql:aptible-seeds
  ports:
    - "5432:5432"
Dev/prod parity via Docker: use the same service/image configuration in production as in development (Docker Compose, Swarm, Kubernetes…)
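For local development, a compose file like the one above is typically driven with a handful of commands. This is an illustrative sketch using the standard Docker Compose (v1) CLI of the era, not part of the deck:

docker-compose build      # build the auth/api images from their repos
docker-compose up -d      # start auth, api, redis, and postgresql together
docker-compose ps         # confirm all four services are running
docker-compose logs api   # follow a single service's output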
Containerized SSL: infrastructure management made easy
Diagram: an Elastic Load Balancer (ELB) forwards TCP/HTTPS to NGiNX on each EC2 Instance; NGiNX proxies plain HTTP upstream.
How to configure NGiNX with multiple dynamic upstreams? Chef? Salt? Ansible?
ENV configuration $UPSTREAM_SERVERS
docker run -d -p 80:80 -p 443:443 \
  -e UPSTREAM_SERVERS=docker:4000,docker:4001 \
  quay.io/aptible/nginx:latest
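To make the ENV-driven approach concrete, here is a minimal sketch of what an image entrypoint could do with $UPSTREAM_SERVERS. This is an assumption for illustration, not the actual quay.io/aptible/nginx entrypoint: expand the comma-separated list into an NGiNX upstream block, then start NGiNX.

#!/bin/bash
# Hypothetical entrypoint sketch: render $UPSTREAM_SERVERS into an upstream block.
set -e

{
  echo "upstream app {"
  # Split the comma-separated list into one "server" directive per entry.
  IFS=',' read -ra SERVERS <<< "$UPSTREAM_SERVERS"
  for server in "${SERVERS[@]}"; do
    echo "  server $server;"
  done
  echo "}"
} > /etc/nginx/conf.d/upstream.conf

exec nginx -g "daemon off;"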
ENV configuration makes testing easier
https://github.com/sstephenson/bats
# Dockerfile
# Install and configure NGiNX...
# ...
ADD test /tmp/test
RUN bats /tmp/test

https://github.com/aptible/docker-nginx
Image: quay.io/aptible/nginx
#!/usr/bin/env bats
# /tmp/test/nginx.bats

@test "It should accept a list of UPSTREAM_SERVERS" {
  simulate_upstream
  UPSTREAM_SERVERS=localhost:4000 wait_for_nginx
  run curl localhost 2>/dev/null
  [[ "$output" =~ "Hello World!" ]]
}
simulate_upstream() {
  nc -l -p 4000 127.0.0.1 < upstream-response.txt
}
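The tests also call a wait_for_nginx helper whose definition isn't shown in the deck. One plausible shape, purely as a hedged sketch (the nginx binary path and timing are assumptions): start NGiNX and poll until it answers on port 80.

wait_for_nginx() {
  # Start NGiNX however the image under test does it; this path is an assumption.
  /usr/sbin/nginx || true
  # Poll for up to ~15 seconds until NGiNX responds.
  for _ in $(seq 1 30); do
    curl -s -o /dev/null localhost && return 0
    sleep 0.5
  done
  return 1
}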
ENV configuration abstracts implementation details: could be NGiNX, HAProxy, …
ENV configuration simplifies configuration management: the central store doesn’t need to know parameters in advance.
ENV configuration $UPSTREAM_SERVERS $FORCE_SSL $DISABLE_WEAK_CIPHER_SUITES (…)
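Putting it together, a single run command can carry all of these knobs. The flag names come from the slide above; the values and their exact semantics here are assumptions for illustration:

# Illustrative only; boolean values and behavior are assumed, not documented here.
docker run -d -p 80:80 -p 443:443 \
  -e UPSTREAM_SERVERS=docker:4000,docker:4001 \
  -e FORCE_SSL=true \
  -e DISABLE_WEAK_CIPHER_SUITES=true \
  quay.io/aptible/nginx:latest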
Vulnerability Response: fix, test, docker push, restart
Heartbleed. POODLEbleed. xBleed???
Integration Tests: document and test every vulnerability response
#!/usr/bin/env bats
# /tmp/test/nginx.bats

@test "It should pass an external Heartbleed test" {
  install_heartbleed
  wait_for_nginx
  Heartbleed localhost:443
  uninstall_heartbleed
}

install_heartbleed() {
  export GOPATH=/tmp/gocode
  export PATH=${PATH}:/usr/local/go/bin:${GOPATH}/bin
  go get github.com/FiloSottile/Heartbleed
  go install github.com/FiloSottile/Heartbleed
}
Integration tests happen during each image build
Images are built automatically via Quay Build Triggers.
Build status is easy to verify at a glance.
Quay Time Machine lets us roll back an image to any previous state.
Database Deployment: standardizing an "API" across databases
New databases mean new dependencies
How to document setup steps for engineers?
How to deploy in production?
How to perform common admin tasks? Backups? Replication? CLI access? Read-only mode?
Wrap Databases in a Uniform API: standardizing an "API" across databases
#!/bin/bash
# run-database.sh
command="/usr/lib/postgresql/$PG_VERSION/bin/postgres -D "$DATA_DIRECTORY" -c config_file=/etc/postgresql/$PG_VERSION/main/postgresql.conf"

if [[ "$1" == "--initialize" ]]; then
  chown -R postgres:postgres "$DATA_DIRECTORY"
  su postgres <<COMMANDS
    /usr/lib/postgresql/$PG_VERSION/bin/initdb -D "$DATA_DIRECTORY"
    /etc/init.d/postgresql start
    psql --command "CREATE USER ${USERNAME:-aptible} WITH SUPERUSER PASSWORD '$PASSPHRASE'"
    psql --command "CREATE DATABASE ${DATABASE:-db}"
    /etc/init.d/postgresql stop
COMMANDS
elif [[ "$1" == "--client" ]]; then
  [ -z "$2" ] && echo "docker run -it aptible/postgresql --client postgresql://..." && exit
  psql "$2"
elif [[ "$1" == "--dump" ]]; then
  [ -z "$2" ] && echo "docker run aptible/postgresql --dump postgresql://... > dump.psql" && exit
  pg_dump "$2"
elif [[ "$1" == "--restore" ]]; then
  [ -z "$2" ] && echo "docker run -i aptible/postgresql --restore postgresql://... < dump.psql" && exit
  psql "$2"
--initialize: Initialize data directory
--client: Start a CLI client
--dump: Dump database to STDOUT
--restore: Restore from dump
--readonly: Start database in RO mode
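In practice those flags are invoked through docker run, matching the usage hints echoed by the script above. The connection URL below is a placeholder, not a real endpoint:

CONNECTION_URL="postgresql://aptible:passphrase@db-host:5432/db"   # placeholder

docker run -it aptible/postgresql --client "$CONNECTION_URL"
docker run aptible/postgresql --dump "$CONNECTION_URL" > dump.psql
docker run -i aptible/postgresql --restore "$CONNECTION_URL" < dump.psql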
db-launch () {
  container=$(head -c 32 /dev/urandom | md5)
  passphrase=${PASSPHRASE:-foobar}
  image="${@: -1}"

  docker create --name $container $image
  docker run --volumes-from $container \
    -e USERNAME=aptible -e PASSPHRASE=$passphrase \
    -e DB=db $image --initialize
  docker run --volumes-from $container $@
}

http://bit.ly/aptible-dblaunch
1. Create "volume container":
   docker create --name $container $image

2. Initialize database data volume:
   docker run --volumes-from $container \
     -e USERNAME=aptible -e PASSPHRASE=$passphrase \
     -e DB=db $image --initialize

3. Run database:
   docker run --volumes-from $container $@

http://bit.ly/aptible-dblaunch
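A hypothetical invocation of db-launch as defined above (image tag, port mapping, and passphrase are illustrative):

PASSPHRASE=some-secret db-launch -d -p 5432:5432 quay.io/aptible/postgresql:latest
# The trailing argument is taken as the image; the other flags are passed through
# to the final docker run, so -d -p publishes the database port in the background.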
Thank you
@fancyremarker
[email protected]
https://speakerdeck.com/fancyremarker