
pkgr

Presented at London Ruby User Group

In the life of every* project there is this moment where standard deployment methods just don’t cut it. You have many servers, many applications, many developers. It can become a mess very, very quickly.

Have you ever dreamed of hosting your own apt repository? When was the last time you were annoyed by slow deployment times because the asset pipeline needs to run on every single server your app runs on? Oh, and let’s not mention installing Ruby version managers on servers, OK? (I will though).

In this talk I will show how packaging (and deploying) Ruby applications doesn’t have to be hard or time-consuming and can work with a project of any size.

* - huge assumption warning!

Łukasz Korecki

March 09, 2015

Transcript

  1. pkgr: Packaging Ruby applications with no sweat
    Łukasz Korecki | @lukaszkorecki | http://lukasz.korecki.me
  2. Agenda
    • Deploying Ruby apps
    • Why the current approach doesn’t scale
    • How to package things?
    • Intro to pkgr
    • Next steps
    • Questions?
  3. How do you deploy?
    • Capistrano
    • Heroku
    • Fabric
    • Vlad
    • Pulsar
    • git-deploy
    • dsh + bash
    • Homegrown solution based on git, chef, shell and Pulsar
    • RSYNC BABY!
  4. Push-based deployments
    • Developer runs a command
    • Stuff happens:
      • Repository gets updated/cloned on all machines
      • Dependencies are installed
      • Some tasks are run (assets, migrations?)
      • Application restart
    • Hopefully it all works
  5. That’s only part of the story
    • Hopefully the machines have all you need
    • Are you using RVM/rbenv on your server?
    • System Ruby?
    • Separate users? One user?
    • Why are assets compiled on all machines?
    • Why does git clone run on all machines?
    • Your deployment is disconnected from your configuration management tools
    • How do you share configuration?
    • What if your internet connection goes down?
    • Can your CI server deploy to staging?
    • Security
    • Build times
  6. Different approach
    • The first requirement was to create (and recreate) an integration testing environment
    • The system is mildly distributed, so there are quite a lot of moving parts (not on Netflix scale, but still)
    • A wild requirement appears!
    • How fast can we build a fresh environment?
  7. Different approach
    • I want to create a new machine and have it ready fast
    • It needs to not only have all required packages but also have my application deployed, set up and running
  8. A new frontier
    • Write code
    • Push to CI
    • Build package
    • Deploy via Puppet run
  9. Solutions
    • Read your distribution’s docs about packaging
    • Work out how to get your code packaged, installed and discoverable by the Ruby interpreter
    • I’m focusing on Ubuntu here, but since it’s a largely solved problem the rest of the talk should be relevant no matter which Linux distribution you’re using
    • If you’re not using Linux… it still might be relevant
  10. Solutions
    • FPM (see the example after this slide)
      • Not Ruby-specific: can package anything into any format
    • Omnibus by Chef
      • “Package the world” approach
      • All dependencies included in the package
    • gem2deb
      • Does what it says on the tin and not more
    • Warbler for JRuby
      • Creates executable JAR/WAR files
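For reference, a minimal fpm invocation looks roughly like the sketch below. The name, version and paths are made-up placeholders; in practice the directory being packaged would already contain the bundled gems (and, ideally, a Ruby build).

    # Hypothetical example: turn a prepared directory into a .deb.
    # -s dir = source is a plain directory, -t deb = build a Debian package.
    fpm -s dir -t deb \
        -n my-app -v 1.0.0 \
        --prefix /opt/my-app \
        -C ./build .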
  11. Solutions
    • pkgr
      • https://github.com/crohr/pkgr
      • Created by Cyril Rohr
    • Available as a service
      • http://packager.io
  12. pkgr
    • Combines:
      • Heroku Buildpacks for preparing the runtime for your application
        • https://devcenter.heroku.com/articles/buildpacks
      • fpm for packaging
        • fpm stands for F████ Package Managers
      • foreman (kinda)
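For orientation, a basic pkgr run looks roughly like this. It is only a sketch; the options for setting the package name, version and target distribution are documented in pkgr’s README rather than spelled out here.

    # pkgr is itself a gem; point it at an application checkout and it uses
    # the buildpacks to prepare the runtime, then fpm to build the package.
    gem install pkgr
    pkgr package path/to/my-app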
  13. What do we get?
    • Our code is packaged
    • Dependencies are packaged
    • Assets can be packaged
    • Ruby interpreter of our choice is also packaged
      • MRI works OOTB
      • JRuby support has been added recently
  14. How do we get it?
    • Setup is similar to how Heroku works
    • Our application has to have a Procfile (see the example after this slide)
      • That means that a single package can provide a web service, a job scheduler and a worker. Each can be started independently on different machines at different capacity
    • Configuration is provided via environment variables
      • In reality there is a config file, but it’s basically a shell script located in /etc/<app name>/conf.d/other
    • Once installed, our application provides a system-wide command which looks kinda like the one provided by Heroku
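A minimal Procfile for the setup described above might look like the following; the process names match the example later in the talk, but the actual commands are just an illustration.

    # Create a Procfile with one line per process type (illustrative commands).
    cat > Procfile <<'EOF'
    web: bundle exec puma -C config/puma.rb
    worker: bundle exec sidekiq
    scheduler: bundle exec clockwork clock.rb
    EOF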
  15. OS integration
    • Once started, our application can be controlled via the service command (see the sketch after this slide)
    • Thanks to foreman export upstart
    • Logs are redirected to /var/log/
    • We can uninstall our code!
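Day-to-day control then looks something like the sketch below. The app name is a placeholder, and the exact per-process log file names depend on how foreman exports the upstart jobs.

    # Assumed app name "my-app"; the service wraps the exported upstart jobs.
    sudo service my-app status
    sudo service my-app restart
    # Logs end up under /var/log/ (exact file layout depends on the foreman export).
    ls /var/log/my-app/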
  16. Wrapper command
    • sudo app-name scale web=1
      • Starts 1 web process
    • sudo app-name config var=val
      • Sets configuration for the app via environment variables
  17. org::application { 'wu-tang-clan':
        version   => '1.0.0',
        processes => [ 'web=1', 'worker=2', 'scheduler=1' ],
        config    => [
          'RACK_ENV=production',
          'PORT=9060',
          'WEB_CONCURRENCY=1',
          "DATABASE_URL=${database_url}",
          "REDIS_URL=${redis_url}",
        ]
      }
  18. org::application { 'wu-tang-clan':
        version   => '1.0.0',                                # Install this version
        processes => [ 'web=1', 'worker=2', 'scheduler=1' ], # Start these processes
        config    => [                                       # Save this to the config file
          'RACK_ENV=production',
          'PORT=9060',
          'WEB_CONCURRENCY=1',
          "DATABASE_URL=${database_url}",
          "REDIS_URL=${redis_url}",
        ]
      }
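For readers who don’t speak Puppet: the (site-specific) org::application define above roughly boils down to the steps below, written here with the wrapper syntax from slide 16. This is a sketch under the assumption that the package is named after the application.

    # Install this version from the internal apt repository.
    sudo apt-get install wu-tang-clan=1.0.0
    # Save these settings to the config file (one var=val per call, as on slide 16).
    sudo wu-tang-clan config RACK_ENV=production
    sudo wu-tang-clan config PORT=9060
    sudo wu-tang-clan config WEB_CONCURRENCY=1
    # Start these processes (assuming scale accepts several types at once, Heroku-style).
    sudo wu-tang-clan scale web=1 worker=2 scheduler=1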
  19. Other approaches
    • Sort of doable with Chef’s deploy resource
      • https://docs.chef.io/resource_deploy.html
    • Personally I had more trouble with it, but it might have improved
  20. Other approaches
    • Docker sort of does this too… but it’s harder to orchestrate
    • Do you really need containers? What is the benefit?
    • Installation & distribution is still not a 100% solved problem
    • Running your own registry is… harder than I expected: the registry itself is a container, which brings its own challenges
    • How about running each app as a separate user and limiting resources that way?
  21. Next steps
    • Setting up your own apt repository hosted on S3 is really easy (example after this slide)
      • https://github.com/krobertson/deb-s3
      • Requires a bit of GPG/PGP magic
    • https://github.com/kyleshank/apt-transport-s3
      • Use AWS credentials for authentication
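To push a freshly built package into such a repository, deb-s3 usage looks roughly like this; the bucket and file names are placeholders, and the signing step is where the GPG/PGP magic mentioned above comes in.

    # deb-s3 is a gem; it uploads the .deb and regenerates the apt index on S3.
    gem install deb-s3
    deb-s3 upload --bucket my-apt-bucket my-app_1.0.0_amd64.deb
    # Signing the repository so that apt trusts it requires a GPG key;
    # deb-s3 can sign during upload (see its README for the relevant option).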