
Fast Rails API


A history of optimizing a Rails API, starting with the AR connection pool and ending with fragment caching. Also includes a list of tools for profiling applications (stackprof, etc.). All the techniques are quite well known.

Anton Kaliaev

March 03, 2014


Transcript

  1. (title slide)

  2. Connection pool
     https://devcenter.heroku.com/articles/concurrency-and-database-connections

     # config/unicorn.rb
     worker_processes (ENV['UNICORN_WORKERS'] || 1).to_i

     # config/database.yml
     development:
       ...
       pool: 5

     Diagram: each Unicorn worker holds its own AR connection pool to the DB;
     one connection per worker gives maximum productivity.
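One common way to wire the pool size to the environment is an ERB expression in database.yml (a sketch; `DB_POOL` is a conventional variable name from the Heroku article above, not something Rails defines):

```yaml
# config/database.yml — hypothetical sketch
production:
  adapter: postgresql
  pool: <%= ENV['DB_POOL'] || 5 %>
```

With one connection per worker, setting `DB_POOL=1` keeps each worker's pool from holding idle connections.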
  3. Establish DB connection after fork
     https://devcenter.heroku.com/articles/concurrency-and-database-connections

     # config/unicorn.rb
     before_fork do |server, worker|
       if defined?(ActiveRecord::Base)
         ActiveRecord::Base.connection.disconnect!
       end
     end

     after_fork do |server, worker|
       if defined?(ActiveRecord::Base)
         config = Rails.application.config.database_configuration[Rails.env]
         config['reaping_frequency'] = ENV['DB_REAP_FREQ'] || 10 # seconds
         config['pool'] = ENV['DB_POOL'] || 5
         ActiveRecord::Base.establish_connection(config)
       end
     end
  4. Unicorn worker killer

     # Gemfile
     group :production do
       gem 'unicorn-worker-killer'
     end

     # config.ru
     if ENV['RAILS_ENV'] == 'production'
       require 'unicorn/worker_killer'

       # Max requests per worker
       max_request_min = 500
       max_request_max = 600
       use Unicorn::WorkerKiller::MaxRequests, max_request_min, max_request_max

       # Max memory size (RSS) per worker
       oom_min = (240) * (1024**2)
       oom_max = (260) * (1024**2)
       use Unicorn::WorkerKiller::Oom, oom_min, oom_max
     end
  5. Summary
     - Connection pool
     - Rails-api
     - Do not load 'rails/all'
     - Establish DB connection after fork / when a new thread is created
     - Unicorn worker killer
     Ratings from the slide: REALLY HELPFUL / HELPFUL / IMPROVES STABILITY
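The "do not load 'rails/all'" point can be sketched as follows (a hypothetical Rails 4-era config/application.rb; the exact railtie list depends on what the app actually uses):

```ruby
# config/application.rb — instead of: require 'rails/all'
require 'action_controller/railtie'
require 'active_record/railtie'
# action_mailer, sprockets, action_view, etc. deliberately left out
# for an API-only app — less code loaded, less memory per worker.
```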
  6. PostgreSQL tuning
     http://postgresql.leopard.in.ua/html/
     http://momjian.us/main/writings/pgsql/hw_performance/

     Diagram: PG backends write through the shared buffer cache (query and
     checkpoint operations) and the write-ahead log (transaction durability),
     then through the kernel disk buffer cache and fsync down to disk blocks.
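A few of the knobs those links discuss, matching the diagram above (illustrative values only; the right settings depend on available RAM and workload, and `checkpoint_segments` is a pre-9.5 setting):

```
# postgresql.conf — hypothetical starting points
shared_buffers = 256MB        # shared buffer cache; ~25% of RAM is a common rule of thumb
wal_buffers = 16MB            # write-ahead log buffers
checkpoint_segments = 16      # fewer, larger checkpoints
effective_cache_size = 1GB    # planner hint about the kernel disk buffer cache
```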
  7. Oj (JSON dumper)

     # Gemfile
     gem 'oj'

           user      system    total     real
     json  0.810000  0.020000  0.830000  (0.841307)
     yajl  0.760000  0.020000  0.780000  (0.809903)
     oj    0.640000  0.010000  0.650000  (0.666230)

     Good idea to use the fastest dumper.
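Switching dumpers can be made safe with a fallback (a sketch; `Oj.dump` with `mode: :compat` produces stdlib-compatible JSON, and the stdlib `json` generator is used when the gem is absent):

```ruby
require 'json'

# Prefer Oj when available, fall back to the stdlib JSON generator.
DUMP =
  begin
    require 'oj'
    ->(obj) { Oj.dump(obj, mode: :compat) }
  rescue LoadError
    ->(obj) { JSON.generate(obj) }
  end

puts DUMP.call('id' => 1, 'guid' => 'abc')
```

Callers always go through `DUMP`, so adding or removing the gem needs no other code changes.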
  8. Active Model Serializers

     # Gemfile
     gem 'active_model_serializers'

     # app/serializers/user_serializer.rb
     class UserSerializer < ActiveModel::Serializer
       root false
       attributes :id, :guid, :current_channel_id,
                  :current_telecast_ts, :created_at, :updated_at
     end

     # app/controllers/api/user_controller.rb
     def show
       @user = User.find params[:id]
       render json: @user
     end
  9. Fragment caching
     https://devcenter.heroku.com/articles/caching-strategies
     http://signalvnoise.com/posts/3113-how-key-based-cache-expiration-works

     # app/controllers/api/application_controller.rb
     class Api::ApplicationController < ActionController::API
       include ActionController::Caching
     end

     # app/controllers/api/users_controller.rb
     def show
       @user = User.find params[:id]
       json = cache ['v1', @user] do
         ActiveModel::Serializer.build_json(self, @user, {}).to_json
       end
       render json: json
     end
  10. Fragment caching

      # app/controllers/api/users_controller.rb
      def index
        json = cache ['v1', CacheKeyRegistry.users] do
          @users = User.all
          ActiveModel::Serializer.build_json(self, @users, {}).to_json
        end
        render json: json
      end

      # lib/cache_key_registry.rb
      module CacheKeyRegistry
        class << self
          def users; key(User, __method__) end

          private

          def key(scope, key_prefix)
            count = scope.count
            max_updated_at = scope.maximum(:updated_at).try(:utc).try(:to_s, :number)
            "#{key_prefix}/all-#{count}-#{max_updated_at}"
          end
        end
      end

      # => "users/all-25-2014-02-23"
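The key-based expiration idea behind `CacheKeyRegistry` can be shown without Rails (a plain-Ruby sketch; `Store` stands in for `Rails.cache`, and the key format mirrors the slide):

```ruby
require 'time'

# Stand-in for Rails.cache: stale keys are never deleted, just never read again.
class Store
  def initialize; @data = {}; end

  def fetch(key)
    @data[key] ||= yield
  end
end

# Mirror of the slide's key: prefix + row count + latest updated_at.
def users_key(rows)
  max = rows.map { |r| r[:updated_at] }.max
  "users/all-#{rows.size}-#{max.strftime('%Y%m%d%H%M%S')}"
end

store = Store.new
rows  = [{ id: 1, updated_at: Time.utc(2014, 2, 23) }]

store.fetch(users_key(rows)) { 'serialized v1' }   # first request: cache miss
rows << { id: 2, updated_at: Time.utc(2014, 2, 24) } # data changed => key changes
puts store.fetch(users_key(rows)) { 'serialized v2' } # new key => fresh serialization
```

No explicit invalidation is needed: any insert, delete, or update changes the count or the max timestamp, so the key itself changes.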
  11. Do not instantiate AR objects

      def self.lightning
        connection.select_all(
          select([:guid, :current_channel_id, :current_telecast_ts]).arel
        ).each do |attrs|
          attrs.each_key do |attr|
            attrs[attr] = type_cast_attribute(attr, attrs)
          end
        end
      end
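What the `type_cast_attribute` loop above is doing can be illustrated in plain Ruby (a hypothetical sketch; real Active Record derives the casts from the schema's column types rather than a hand-written table):

```ruby
# Hypothetical cast table; Active Record builds this from column metadata.
CASTS = {
  'id'                 => ->(v) { Integer(v) },
  'current_channel_id' => ->(v) { Integer(v) },
  'guid'               => ->(v) { v.to_s }
}

# Cast a raw DB row (string values) in place, like the slide's loop —
# the result is a plain Hash, never a full AR model instance.
def type_cast_row(attrs)
  attrs.each_key do |attr|
    cast = CASTS[attr]
    attrs[attr] = cast.call(attrs[attr]) if cast
  end
end

row = { 'id' => '42', 'guid' => 'abc', 'current_channel_id' => '7' }
p type_cast_row(row)
```

Skipping model instantiation avoids attribute objects, dirty tracking, and callbacks, which is the whole point of the `lightning` method on the slide.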
  12. Stackprof

      $ stackprof stackprof-cpu-6196-1393432908.dump --text
      ==================================
        Mode: cpu(1000)
        Samples: 545 (0.00% miss rate)
        GC: 48 (8.81%)
      ==================================
        TOTAL   (pct)  SAMPLES  (pct)  FRAME
           42  (7.7%)       39 (7.2%)  #<Module:0x000000020550f8>.escape
           33  (6.1%)       23 (4.2%)  ActiveRecord::ConnectionAdapters::Column.new_time
           21  (3.9%)       21 (3.9%)  Set#include?
           13  (2.4%)       13 (2.4%)  block in ActiveSupport::Dependencies#search_for_file
           13  (2.4%)       13 (2.4%)  block in ActiveSupport::Dependencies#autoloadable_module?
           26  (4.8%)        4 (0.7%)  Channel::Telecast#cover_url
  13. Stackprof

      $ stackprof stackprof-cpu-6196-1393432908.dump --text \
          --method 'Channel::Telecast#cover_url'
      Channel::Telecast#cover_url (/projects/undev/simpletv-backend/app/models/concerns/coverable.rb:12)
        samples: 4 self (0.7%) / 26 total (4.8%)
        callers:
          26 (100.0%) Channel::TelecastSerializer#cover
        callees (22 total):
          22 (100.0%) Channel::Telecast#cover
        code:
                               |  12 | def cover_url(version = nil)
                               |  13 |   if cover.file.nil? && external_cover
          26 (4.8%) / 4 (0.7%) |  14 |     external_cover.gsub(configus.cdn_host, '/cdn')