A walkthrough of optimizing a Rails API, starting with the ActiveRecord connection pool and ending with fragment caching. The deck also lists tools for profiling applications (stackprof, etc.). All of the techniques are well known.
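Since the deck only names the profiling tools, here is a minimal stackprof sketch; the profiled block and output path are illustrative placeholders, not from the slides:

  require 'stackprof'

  # Profile wall-clock time spent in a hot code path and dump the result.
  StackProf.run(mode: :wall, out: 'tmp/stackprof-users.dump') do
    1000.times { User.first }  # hypothetical hot path
  end

  # Inspect the report from the shell:
  #   stackprof tmp/stackprof-users.dump --text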
Connection pool
https://devcenter.heroku.com/articles/concurrency-and-database-connections

  # config/database.yml
  pool: 5

[Diagram: several Unicorn workers, each holding one connection from the ActiveRecord connection pool to the DB]

One AR connection per worker for maximum productivity.
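The linked Heroku article also suggests sizing the pool from an environment variable so it can be tuned per dyno without a deploy; a minimal sketch (the DB_POOL name follows the article, the rest of database.yml is omitted):

  # config/database.yml
  production:
    # Pool size comes from the environment, falling back to 5.
    pool: <%= ENV['DB_POOL'] || 5 %>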
Establish DB connection after fork
https://devcenter.heroku.com/articles/concurrency-and-database-connections

  # config/unicorn.rb
  after_fork do |server, worker|
    if defined?(ActiveRecord::Base)
      config = Rails.application.config.database_configuration[Rails.env]
      config['reaping_frequency'] = ENV['DB_REAP_FREQ'] || 10 # seconds
      config['pool'] = ENV['DB_POOL'] || 5
      ActiveRecord::Base.establish_connection(config)
    end
  end
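The same Heroku article pairs this with a before_fork hook, so the master process's connection is closed and never shared by the forked workers; a minimal sketch:

  # config/unicorn.rb
  before_fork do |server, worker|
    if defined?(ActiveRecord::Base)
      # Disconnect in the master; each worker reconnects in after_fork.
      ActiveRecord::Base.connection.disconnect!
    end
  end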
Unicorn worker killer

  # Gemfile
  group :production do
    gem 'unicorn-worker-killer'
  end

  # config.ru
  require 'unicorn/worker_killer'

  max_request_min = 500
  max_request_max = 600
  # Max requests per worker
  use Unicorn::WorkerKiller::MaxRequests, max_request_min, max_request_max

  oom_min = (240) * (1024**2)
  oom_max = (260) * (1024**2)
  # Max memory size (RSS) per worker
  use Unicorn::WorkerKiller::Oom, oom_min, oom_max
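Both killers pick a random threshold between the min and max values, so workers hit their limits at staggered times instead of all restarting at once.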
Fragment caching

  # app/controllers/api/users_controller.rb
  class Api::UsersController < ApplicationController
    # Pull in fragment caching helpers (not included by default
    # in an API-only base controller).
    include ActionController::Caching

    def show
      @user = User.find(params[:id])
      json = cache ['v1', @user] do
        ActiveModel::Serializer.build_json(self, @user, {}).to_json
      end
      render json: json
    end
  end
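Roughly the same effect can be had with Rails.cache.fetch directly, without the controller-level cache helper; a sketch assuming an AMS 0.8-style UserSerializer class (not shown in the slides):

  def show
    @user = User.find(params[:id])
    json = Rails.cache.fetch(['v1', @user]) do
      # Serialize once and store the JSON string; @user's cache_key
      # (based on updated_at) invalidates the entry when the record changes.
      UserSerializer.new(@user).to_json
    end
    render json: json
  end

Bumping the 'v1' prefix invalidates every cached payload at once, which is the usual escape hatch when the serializer's output format changes.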