Slide 1

Rails and the Internet of Things
Miami Ruby Brigade

Slide 2

Rails and IoT
• What is the Internet of Things?
• Code for Miami Flood Tracker
• IoT and Scaffolds
• All This Data
• Ingest and Rollup
• Porting to Rails 6

Slide 3

What is the Internet of Things?
• Connecting a thing that wasn’t always on the internet to the internet
• Making a computer with no real UI do internet stuff

Slide 4

No content

Slide 5

No content

Slide 6

Solar cell • Power converter • Microcontroller & cell uplink • Range sensor

Slide 7

Code for Miami Flood Tracker
• Connect
• 5 times:
  • Check range
  • Sleep 1s
• Report median range
• Report all five ranges
• Sleep
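
The firmware itself runs on the device (as C++, not Ruby), but the sampling logic is simple enough to sketch in the language of the rest of this talk; read_range_mm, report, and the event names are hypothetical stand-ins, not the real firmware API:

def measure_and_report
  ranges = 5.times.map do
    range = read_range_mm # check range
    sleep 1               # sleep 1s between samples
    range
  end
  report "floodtracker/level", ranges.sort[2]        # median of five
  report "floodtracker/level_raws", ranges.join(",") # all five ranges
end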

Slide 8

Code for Miami Flood Tracker
• Particle Electron
  • Microcontroller with cell hardware
• Particle Cloud
  • Interacts with microcontrollers
  • Web hooks

Slide 9

Long-Term Goals
• Sign up for alerts
• See useful graphs
• Figure out where to focus resources

Slide 10

Short-Term Goals
• Are the numbers making sense?
• How often can I measure?
• Can I bank enough power for storms?

Slide 11

Different Goals
• Web hooks mean I can write software to solve my problems without having to answer for someone else's

Slide 12

Web Hooks and Scaffolds
• They’re mostly great!

Slide 13

Scaffold Params
{
  "utf8"=>"✓",
  "authenticity_token"=>"AUTHENTICITY_TOKEN",
  "spark"=>{
    "event"=>"asdf",
    "data"=>"asdf",
    "coreid"=>"asdf",
    "published_at(1i)"=>"2019",
    "published_at(2i)"=>"5",
    "published_at(3i)"=>"10",
    "published_at(4i)"=>"01",
    "published_at(5i)"=>"37"
  },
  "commit"=>"Create Spark"
}
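
The published_at(1i) through published_at(5i) keys are Rails’ multiparameter date fields; the scaffold’s generated form produces them with the datetime_select helper, roughly:

<%= form.datetime_select :published_at %>

Each select posts one component (year, month, day, hour, minute), and ActiveRecord reassembles them into a single datetime on assignment.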

Slide 14

Particle Params
{
  "event"=>"floodtracker/battery",
  "data"=>"29.113281",
  "published_at"=>"2019-05-10T01:27:11.479Z",
  "coreid"=>"400036001751353338363036",
  "key"=>"NOT_THE_REAL_KEY"
}

Slide 15

Param Differences?
• Don’t require :spark and then permit the fields:
  params.require(:spark).permit(:event, :data, :coreid, :published_at)
• Just permit the fields directly:
  params.permit(:event, :data, :coreid, :published_at)

Slide 16

Published at?
• Rails is smart, it figures it out: ActiveRecord casts both the scaffold’s multiparameter fields and Particle’s ISO 8601 string to the same datetime column
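
Concretely, assigning Particle’s string to the attribute just works; a console sketch with the scaffolded model:

spark = Spark.new(published_at: "2019-05-10T01:27:11.479Z")
spark.published_at.class # => ActiveSupport::TimeWithZone
spark.published_at.utc   # => 2019-05-10 01:27:11 UTC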

Slide 17

Authenticity Token?
• If an attacker can make an authed request, they've already won, who cares lol
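
In code, that means exempting the webhook action from Rails’ CSRF check; a minimal sketch (this line isn’t shown in the deck):

class SparksController < ApplicationController
  # Particle can't fetch a CSRF token before POSTing, so skip the check here;
  # the shared-key check coming up on slide 21 authenticates the request instead.
  skip_before_action :verify_authenticity_token, only: :create
end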

Slide 18

No content

Slide 19

Web Hook Authentication
• Twitter has complex bidirectional authentication
  • Twitter authenticates the receiver
  • The receiver can authenticate Twitter

Slide 20

Web Hook Authentication
• Particle doesn’t
  • You can give Particle a URL
  • You can tell Particle to send arbitrary request headers

Slide 21

Web Hook Authentication
class SparksController < ApplicationController
  # …
  def create
    if params[:key] != ENV['KEY']
      return render plain: 'idk', status: 403
    end

    @spark = Spark.new(params.permit(:event, :data, :coreid, :published_at))
    # …
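
One hardening the deck doesn’t show: a plain != comparison can, in principle, leak the key byte by byte through response timing. A hypothetical variant of the check using ActiveSupport’s constant-time comparison (ActiveSupport::SecurityUtils.secure_compare, Rails 5.2+):

unless ActiveSupport::SecurityUtils.secure_compare(params[:key].to_s, ENV['KEY'].to_s)
  return render plain: 'idk', status: 403
end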

Slide 22

Web Hook Authentication
bkerley@gunderson ★ 2.5.1 ~/Documents/floodtracker-watcher master
> heroku config:get KEY
NOT_THE_REAL_KEY

Slide 23

No content

Slide 24

sry 4 excel

Slide 25

All This Data
• 10,000 row limit ÷ 230 rows per day ≈ 43 days

Slide 26

All This Data
• Should we…
  • Pay more?
  • Be smarter about what we store for how long?

Slide 27

Paying More Is A Good Option
• If you’re a business with revenue, it’s often the cheapest option
• 10M rows (43,000 days, or 430 days for 100 units) is $9/month
• How much dev time do you get for $9?

Slide 28

Paying More Isn’t Always an Option
• Community group with a small money budget and sporadic labor
• If a couple of evenings of my time can sort it out indefinitely, that’s a good thing to do

Slide 29

Ingest and Rollup
• Ingest
  • Taking in data
• Rollup
  • Aggregating data

Slide 30

Ingest and Rollup
• We're kind of already doing this
  • Ingest into a single table
  • Rollup by deleting old records when Heroku emails me

Slide 31

Ingesting Better
• Classify records
• Parse them differently
• Store them differently

Slide 32

Ingesting Better
• Switch on event name
• Initialize models with event content

Slide 33

Models
• Quip
  • String
  • Store if different than the previous one for a given coreid
  • Maps to a firmware epoch

Slide 34

Models
• Level (millimeters)
  • Integer
  • Store every reading for a week
  • Roll up with first, min, max, stddev, mean

Slide 35

Models
• Battery
  • Float
  • Store every reading for a week
  • Roll up with first, min, max, stddev, mean
  • Might be interesting to correlate with weather?

Slide 36

Models
• LevelRaw
  • array<float>
  • Store every reading for a week
  • (this is mostly debug)
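
The deck doesn’t show how the five raw ranges are encoded; assuming data arrives as a comma-separated string and the table has a Postgres float-array column named readings (both assumptions), ingest could look like:

class LevelRaw < ApplicationRecord
  def self.ingest(params)
    create(
      readings: params[:data].split(",").map(&:to_f), # e.g. "123.0,124.5,…"
      coreid: params[:coreid],
      published_at: params[:published_at]
    )
  end
end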

Slide 37

Models
• SleepPlan (seconds)
  • Integer
  • Store a week
  • Roll up sum, count
  • Debug and UI

Slide 38

Models
• SparkDiagnostic
  • json
  • Store a week

Slide 39

Let's Code
• Starting from rev 6a3f6cf2

Slide 40

Easy One First
5037a63 update rails

Slide 41

More Scaffolds
5de9cad add coreid and published_at to tables
a1149fd spark_diagnostics scaffold
1846878 sleep_plans scaffold
e3d139b level_raws scaffold
4afa16a battery scaffold
20e2369 level scaffold
e814ab5 rm level controller
f306a4f add restore instructions
d4135cb quip scaffold

Slide 42

Ingesting Into Separate Models
class Ingester
  EVENT_MAPPING = {
    "floodtracker/sleep_plan" => SleepPlan,
    …
  }

  def self.ingest(params)
    event_klass = EVENT_MAPPING[params[:event]]
    if event_klass.nil?
      event_klass = Spark
    end
    event_klass.ingest(params)
  end
end
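
The matching ingest controller (commit 89db820) isn’t shown in the deck; a sketch of how it might hand webhook requests off to Ingester, reusing the key check from slide 21:

class IngestsController < ApplicationController
  skip_before_action :verify_authenticity_token

  def create
    return render plain: 'idk', status: 403 if params[:key] != ENV['KEY']

    Ingester.ingest(params.permit(:event, :data, :coreid, :published_at))
    head :ok
  end
end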

Slide 43

Ingesting Into Separate Models
class Battery < ApplicationRecord
  def self.ingest(params)
    create(
      reading: params[:data],
      coreid: params[:coreid],
      published_at: params[:published_at]
    )
  end
end

Slide 44

Backfilling with Old Data
class CopyBatteries < ActiveRecord::Migration[5.2]
  def up
    Battery.connection.execute(<<-SQL)
      INSERT INTO batteries
        (reading, coreid, published_at, created_at, updated_at)
        (SELECT CAST(data AS float), coreid, published_at, now(), now()
         FROM sparks
         WHERE event = 'floodtracker/battery');
    SQL
  end
end

Slide 45

Bunch of Work
80e19c4 add coreid and published_at to independent views
18ead20 migrate backlog data to independent tables
8f2add1 migrate levels
be1119a yeah i'm good at programming
c8b131c trying solargraph
f155674 fix sparks route
3d021f0 fix ingest route
0bd367f ingest into sparks if nothing else
89db820 ingest controller
4d08822 ingest models
0de6c59 remove create, update, and destroy from new scaffolds

Slide 46

Implementing Rollup
• Quip: really should be rolled up at insert
• LevelRaws, SparkDiagnostic: just delete
• Level, Battery, SleepPlan: roll up into new table

Slide 47

Quip Rollup
SELECT * FROM
  (SELECT id, published_at, body,
          LAG(body) OVER (ORDER BY published_at ASC) AS prev
   FROM quips) AS hist
WHERE body IS DISTINCT FROM prev
ORDER BY published_at DESC;

 id  |        published_at        |         body          |       prev
-----+----------------------------+-----------------------+-------------------
 227 | 2019-04-07 10:47:45.272-04 | allegory of nick cave | oodles and oodles
  59 | 2019-03-31 20:06:42.017-04 | oodles and oodles     |
(2 rows)

Slide 48

Quip Rollup
 id  | coreid |         published_at          |         body          |         prev
-----+--------+-------------------------------+-----------------------+-----------------------
 459 | 12345  | 2019-05-17 15:13:18.910431-04 | test quip             | allegory of nick cave
  33 | 40003  | 2019-05-08 15:37:59.254-04    | allegory of nick cave | test quip
 460 | 12345  | 2019-05-08 15:14:57.069987-04 | test quip             | allegory of nick cave
 227 | 40003  | 2019-04-07 10:47:45.272-04    | allegory of nick cave | oodles and oodles
  59 | 40003  | 2019-03-31 20:06:42.017-04    | oodles and oodles     |
(5 rows)

Slide 49

Quip Rollup
• Need to check against coreid
  • Could loop over them during The Big Rollup
  • But there’s only one real one in the db so #yolo
• Need to check at insert time

Slide 50

Quip Big Rollup
DELETE FROM quips
WHERE id IN
  (SELECT id FROM
    (SELECT id, coreid, published_at, body,
            LAG(body) OVER (ORDER BY coreid DESC, published_at ASC) AS prev
     FROM quips) AS hist
   WHERE body IS NOT DISTINCT FROM prev
   ORDER BY published_at DESC, coreid DESC);
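
With more than one real device, the stricter window would partition by coreid rather than just ordering by it (a sketch, not from the deck):

LAG(body) OVER (PARTITION BY coreid ORDER BY published_at ASC) AS prev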

Slide 51

Quip Rollup at Insert
class Quip < ApplicationRecord
  def self.ingest(params)
    existing = self.
      where(coreid: params[:coreid]).
      order(published_at: :desc).
      first
    return existing if existing&.body == params[:data]

    self.create(
      body: params[:data],
      coreid: params[:coreid],
      published_at: params[:published_at]
    )
  end
end

Slide 52

Just Delete
class SparkDiagnostic < ApplicationRecord
  def self.rollup
    where("published_at < (now() - '7 day'::interval)").
      delete_all
  end
end

Slide 53

Summary Rollups
SELECT i.*, t.reading AS first
FROM (
  SELECT windows.date, s.coreid,
         COUNT(s.reading),
         MIN(s.published_at) AS first_publish,
         MIN(s.reading), AVG(s.reading),
         STDDEV(s.reading), MAX(s.reading)
  FROM
    (SELECT DATE(generate_series(
       (SELECT MIN(DATE(published_at)) FROM levels),
       (SELECT MAX(DATE(published_at)) FROM levels),
       '1 day')) AS date) AS windows
  RIGHT JOIN

Slide 54

Summary Rollups
• sorry for bamboozling you with sql
• Use `generate_series` to turn start and end timestamps into a list of dates
• Use COUNT, MIN, MAX, AVG, STDDEV to get aggregates
• Use MIN(published_at) to find first publish, to get a first value for the day (for candlestick charts)

Slide 55

Summary Rollups
• Do the same for Batteries, SleepPlans

Slide 56

Time Zones
• What's the difference between "timestamp" and "timestamptz" anyways?

# \d batteries
              Table "public.batteries"
    Column    |            Type
--------------+-----------------------------
 published_at | timestamp with time zone
 created_at   | timestamp without time zone

# select published_at, created_at from batteries […];
        published_at        |         created_at
----------------------------+----------------------------
 2019-05-20 16:21:32.079-04 | 2019-05-20 20:21:32.298862

Slide 57

Time Zones
• Basically, we want to use `published_at`’s knowledge of time zones to scope our dates by irl Miami time
• Configuring “America/New_York” in the rollups does that
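
Putting slides 53 through 57 together, a sketch of what one summary rollup might look like; the battery_histories column names are guesses from the aggregates on slide 35, and the first-value join from slide 53 is left out for brevity:

class Battery < ApplicationRecord
  CUTOFF = "(now() - '7 day'::interval)"

  def self.rollup
    # Aggregate week-old readings into one battery_histories row per
    # Miami-local day, then delete the raw readings they summarize.
    connection.execute(<<-SQL)
      INSERT INTO battery_histories
        (date, coreid, count, min, avg, stddev, max, created_at, updated_at)
      SELECT DATE(published_at AT TIME ZONE 'America/New_York'), coreid,
             COUNT(reading), MIN(reading), AVG(reading),
             STDDEV(reading), MAX(reading), now(), now()
      FROM batteries
      WHERE published_at < #{CUTOFF}
      GROUP BY 1, 2;
    SQL
    where("published_at < #{CUTOFF}").delete_all
  end
end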

Slide 58

Rollups
e6e8a2b battery_histories timestamptz
f917ed3 battery rollup
c18bb63 battery history scaffold
165965e rollups are hard-coded to miami time
ca46abf sleep plan histories scaffold
c1e4193 start sleep plan rollup
e3cce56 fix issues with level rollup
a28bb6b hose out old levels
5f06949 level_histories scaffold
39e122d start level rollup
dd5b3d6 update ruby
c8c2560 roll up level_raws and spark_diagnostics
91608f2 don't store redundant quips

Slide 59

Automating Rollup
> rails g task rollup daily
Running via Spring preloader in process 44030
      create  lib/tasks/rollup.rake

Slide 60

lib/tasks/rollup.rake
namespace :rollup do
  desc "Do daily aggregation and cleaning"
  task daily: :environment do
    [SparkDiagnostic, Level, LevelRaw, SleepPlan, Battery].each do |m|
      m.rollup
    end
    Battery.connection.execute(<<-SQL)
      VACUUM FULL ANALYZE;
    SQL
  end
end

Slide 61

Scheduling Rollup
• Heroku Scheduler
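
Scheduler runs a shell command in a one-off dyno on a fixed cadence; the daily job’s command here would presumably be something like:

rake rollup:daily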

Slide 62

Phew!
ea0a19d gem updates
a0ce9a6 rollup rake task
c09897f remove changes from history controllers