
Using Databases to pull your application's weight

This talk is about utilizing your database more, so that you can offload a lot of CPU-intensive work from Rails to the database and make your application run faster.

Harisankar P S

December 02, 2016



Transcript

  1. { "name" => "Harisankar P S", "email" => "[email protected]", "twitter" => "coderhs", "facebook" => "coderhs", "github" => "coderhs", "bio" => "I write ruby code for a living." }
  2. I am from the city of Kochi, in the state of Kerala, in India.
  3. നമസ്കാരം (Namaskaram) is how we greet in Malayalam. Fun fact: there are 22 official languages in India, 1,653 spoken languages, and over 50,000 dialects. I know three of them: English, Malayalam and Tamil. I am from a huge and diverse country =)
  4. I brought some tastes from Kerala. Come meet me after my talk and I might share them with you.
  5. This talk is about how we can offload a couple of the jobs done by Rails to the database. If you have a Hulk, don't feel scared to USE it.
  6. Today we are going to talk about • Query Planner • Indexing • Materialised Views • Generating JSON
  7. Question • SQL syntax is all about what the results should be • What you want in your result - SELECT id, name • Or some information about the data - SELECT avg(price), max(price), min(price) • So where is the decision on how the data should be fetched made?
  8. Well, that's what the query planner is all about. It's the brain of a DB. We need to understand how the system works before we can improve its performance.
  9. • A query plan is created by the DB before the query you gave is executed. • Each plan has a cost of running the query, and the DB chooses the one with the least cost. • The query planner assumes the plan it has is the ideal one.
  10. Truth is, it can't always pick the ideal one. A DB doesn't know all the scenarios it will be put under; it's up to us to nudge and optimise it.
  11. So we need to see what the query planner sees. Active Record has the .explain method to help us there.
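
    A minimal sketch of what that looks like, assuming a hypothetical User model with an email column (plan output abridged, numbers illustrative):

      # Ask PostgreSQL how it plans to run this query
      User.where(email: 'someone@example.com').explain
      # EXPLAIN for: SELECT "users".* FROM "users" WHERE "users"."email" = 'someone@example.com'
      #  Seq Scan on users  (cost=0.00..11.75 rows=1 width=72)
      #    Filter: ((email)::text = 'someone@example.com'::text)
      # A "Seq Scan" means the planner reads the whole table;
      # an index on email would let it use an Index Scan instead.
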
  12. So we check the query plan, find where we are slowing down, fix it, and make the planner choose the faster method.
  13. I have done all of this in production =), so you don't need to feel scared to run it.
  14. • Indexes are a special lookup table that the database search engine can use to speed up data retrieval. • An index is like a pointer to a particular row of a table, with the indexed fields kept in order. • The database is smart: even with indexes in place, if it finds a sequential scan costs less, it will go for that instead.
  15. We should index: • the primary key • foreign keys • all columns you will pass into a WHERE clause • the keys used to JOIN tables • date columns (if you are going to query them frequently, like rankings for a particular date) • the type column in an STI or polymorphic table • partial indexes for scopes (see the migration sketch below)
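
    A minimal migration sketch of the above (the sales table, its columns, and the unpaid scope are illustrative, not from the talk):

      class AddIndexesForCommonQueries < ActiveRecord::Migration
        def change
          add_index :sales, :invoice_date   # date column queried frequently
          add_index :sales, :user_id        # foreign key used in JOINs
          # Partial index backing a scope such as
          # scope :unpaid, -> { where(paid: false) }
          add_index :sales, :created_at, where: 'paid = false',
                    name: 'index_sales_unpaid_created_at'
        end
      end
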
  16. Do not index: • tables with a lot of writes (every write must also update the index) • tables you know will remain small throughout their lifetime • columns whose values you will be manipulating a lot
  17. Database views? Database views are like the views in our Rails apps. A Rails view (an HTML page) shows data from multiple models on a single page. Similarly, we can show data from multiple tables as a single table using the concept called views. Why would we do that? Because it makes life easier.
  18. Instead of running this every time you want the managers: SELECT id, name, email FROM companies WHERE role='manager'
  19. You can create a view: CREATE VIEW company_managers AS SELECT id, name, email FROM companies WHERE role='manager'; And simply do: SELECT * FROM company_managers;
  20. Note: • The schema of a view lives in the memory of the DB • The result is not stored • Each time you query the view, it actually runs our query to get the results • Views are called pseudo tables
  21. Materialised views are the next evolution of database views: we store the result in a table as well. • This was first introduced by Oracle • It is now found in PostgreSQL, Microsoft SQL Server, IBM DB2, etc. • MySQL doesn't have it, but you can get it using open source extensions.
  22. How can we use it in Ruby? Thanks to ActiveRecord, it's easy to access such pseudo tables.
  23. Create a migration to record the materialised view. We need a bit of SQL here:

    class CreateAllTimeSalesMatView < ActiveRecord::Migration
      def up
        execute <<-SQL
          CREATE MATERIALIZED VIEW all_time_sales_mat_view AS
            SELECT sum(amount) AS total_sale,
                   DATE_TRUNC('day', invoice_date) AS date_of_sale
            FROM sales
            GROUP BY DATE_TRUNC('day', invoice_date);
        SQL
      end

      def down
        execute('DROP MATERIALIZED VIEW IF EXISTS all_time_sales_mat_view')
      end
    end
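
    One detail worth noting: to use REFRESH MATERIALIZED VIEW CONCURRENTLY (next slide), PostgreSQL requires a unique index on the materialised view. A sketch of adding one inside the same migration, assuming date_of_sale is unique per row:

      add_index :all_time_sales_mat_view, :date_of_sale, unique: true
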
  24. Create an ActiveRecord model. I place these views at the location app/models/views:

    class AllTimeSalesMatView < ActiveRecord::Base
      self.table_name = 'all_time_sales_mat_view'

      # The view is read-only, so block accidental writes
      def readonly?
        true
      end

      # Re-run the stored query to pick up new data
      def self.refresh
        ActiveRecord::Base.connection.execute('REFRESH MATERIALIZED VIEW CONCURRENTLY all_time_sales_mat_view')
      end
    end
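
    Using it then looks like any other model (a sketch; the query itself is illustrative):

      AllTimeSalesMatView.refresh                    # re-run the stored query
      AllTimeSalesMatView.order(date_of_sale: :desc) # query it like a table
                         .limit(7)
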
  25. First, Last and Find • They don't work on your view, because they operate on your table's primary key and a view doesn't have one • If you want to use them, declare one of the fields in your view as the primary key:

    class Model < ActiveRecord::Base
      self.primary_key = :id
    end
  26. Benchmark • I created a table with 1 million random sales and random dates within a year. (Dates were bookmarked as well)
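
    A sketch of seeding a table like that with PostgreSQL's generate_series instead of inserting row by row from Ruby (table and column names assumed from the earlier migration):

      ActiveRecord::Base.connection.execute(<<-SQL)
        INSERT INTO sales (amount, invoice_date)
        SELECT (random() * 1000)::int,
               NOW() - (random() * interval '365 days')
        FROM generate_series(1, 1000000);
      SQL
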
  27. Take Away • Faster to fetch data • Captures commonly used joins & filters • Pushes data-intensive processing from Ruby to the database • Allows fast, live filtering of complex associations or calculated fields • We can index the various fields in the view
  28. Pain Points • We need to write SQL • We will be using more RAM and storage • Requires Postgres 9.3 for matviews • Requires Postgres 9.4 to refresh concurrently • Can't have live data, though you can fix this by creating your own table and updating it with the latest information
  29. • Websites with simple HTML and plain JavaScript-based AJAX are coming to an end • It's the era of modern JS frameworks • JSON is the glue that binds the frontend and our backend • So it's natural to find more and more DBs supporting the generation and storage of JSON
  30. To convert a single row to JSON we use the row_to_json() function in SQL: select row_to_json(users) from users where id = 1
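
    From Rails you can run this directly and hand the string to the client, since the DB already produced JSON; a sketch against the same users table:

      json = ActiveRecord::Base.connection.select_value(
        'select row_to_json(users) from users where id = 1'
      )
      # json is a String like '{"id":1,"email":"...",...}' -- no serializer needed
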
  31. But for more practical use we write queries like: select row_to_json(results) from ( select id, email from users ) as results which returns {"id":1,"email":"[email protected]"}
  32. A more complex one:

    select row_to_json(result)
    from (
      select id, email,
        (
          select array_to_json(array_agg(row_to_json(user_projects)))
          from (
            select id, name
            from projects
            where user_id = users.id
            order by created_at asc
          ) user_projects
        ) as projects
      from users
      where id = 1
    ) result
  33. { "id":1, "email":"[email protected]", "projects":[{"id":3, "name":"CSnipp"}] } We did the data preloading as well: instead of needing to run a second query separate from the first one, we got the data about the projects too.
  34. json_build_object • Added in PostgreSQL 9.4 to make JSON creation a bit simpler: select json_build_object('foo',1,'bar',2); returns {"foo": 1, "bar": 2}
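
    It also works per column, which reads more naturally than nested row_to_json subqueries; a sketch against the users table from earlier:

      ActiveRecord::Base.connection.select_value(
        "select json_build_object('id', id, 'email', email) from users where id = 1"
      )
      # => '{"id" : 1, "email" : "..."}'
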
  35. • For simple JSON creation you can use a gem called Surus • https://github.com/jackc/surus
  36. Which lets you write code like:

    User.find_json 1
    User.find_json 1, columns: [:id, :name, :email]
    Post.find_json 1, include: :author
    User.find_json(user.id, include: {posts: {columns: [:id, :subject]}})
    User.all_json
    User.where(admin: true).all_json
    User.all_json(columns: [:id, :name, :email], include: {posts: {columns: [:id, :subject]}})
    Post.all_json(include: [:forum, :post])
  37. But if, like me, you want to keep as much stuff as possible in Ruby: create a materialised view for your complicated query, and then use the gem to generate the JSON =)
  38. Benchmarks • In our case, requests to a .json URL that used to take 2 seconds came down to <= 200ms • Some benchmarks I found online mention
  39. • PostgreSQL sacrifices speed for durability and reliability • PostgreSQL is known for its slow writes and faster reads • Writes are slow because it waits for confirmation that what we inserted has been recorded on the hard disk • You can disable this confirmation check to speed up your inserts if you are inserting a lot of rows every second
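
    The setting behind this is synchronous_commit; a sketch of switching it off for the current connection before a burst of inserts (RawLog is a hypothetical model):

      # Commits now return before the WAL is flushed to disk.
      # Applies only to this connection; other sessions stay fully durable.
      ActiveRecord::Base.connection.execute('SET synchronous_commit TO OFF')
      log_lines.each { |attrs| RawLog.create!(attrs) }
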
  40. • The only issue now is that if your DB crashes, it can't recover the data not yet saved to the hard disk • It won't corrupt the data, but you might lose some rows • Not to be used in cases where you want data integrity to be 100% • Use it where you don't mind losing some information, or where you can rebuild it from outside your DB, like logs or raw information
  41. • Index data so that we don't end up scanning the whole DB • Simplify the way you fetch data from the DB using views • Move complicated JSON generation to the database • Disable synchronous commit when you feel it won't cause a problem
  42. Conclusions • Know your tech stack • We should have control over all our moving parts • Try to bring out the best in your tech stack before you start throwing more money at it • SQL has been around for 40 years and it's planning to stay for a while longer =) • There is no golden rule: what worked for me might not work for your specific use case
  43. I blogged about this in detail. • http://blog.redpanthers.co/materialized-views-caching-database-query/ • http://blog.redpanthers.co/create-json-response-using-postgresql-instead-rails/ • http://blog.redpanthers.co/different-types-index-postgresql/ • http://blog.redpanthers.co/optimising-postgresql-database-query-using-indexes/