About me
● Carlos Justiniano
● VP of Engineering at Flywheel Sports
● 2005 World Record in Distributed Computation
● Leveraging the power of Redis since 2011
@cjus on GitHub, Twitter, and Medium, and at flywheelsports.com
More about me at http://cjus.me
Slide 3
What we’ll cover today
A case study involving massive file transfers, Redis, microservices, job creation and orchestration, and serverless computing using AWS Lambda.
Slide 9
Broadcasting live from NYC
Slide 13
Our video content transfer use case
Slide 14
Challenges
Quickly migrate our entire video library from one CDN to another:
● Object Storage
● HTTP Live Streaming (HLS)
● Ensuring no file is left behind
Slide 16
Begins with a live broadcast
Ends with an at-home rider
Slide 17
Multiple manifest files pointing to collections of file segments
[Diagram: a manifest file referencing its segment files]
2000 classes
16 streams per class
~500 file segments* per stream
*each file segment ranges from 100 bytes to 2 megabytes in size
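For reference, an HLS manifest is just a plain-text playlist enumerating a stream's segment files. A minimal illustrative example (the segment names are hypothetical, not from the actual library):

    #EXTM3U
    #EXT-X-VERSION:3
    #EXT-X-TARGETDURATION:10
    #EXT-X-MEDIA-SEQUENCE:0
    #EXTINF:9.009,
    segment00000.ts
    #EXTINF:9.009,
    segment00001.ts
    #EXT-X-ENDLIST

Crawling the library means fetching these playlists and following the segment URIs they contain, much like following links on a web page.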
Slide 26
2000 classes
16 streams per class
~500 file segments* per stream
2000 × 16 × 500 = 16,000,000 files
*each file segment ranges from 100 bytes to 2 megabytes in size
Slide 27
Our Solution
● Pull individual files through the Verizon CDN
● Web-crawl the manifest files
● Use Redis-powered microservices to orchestrate millions of AWS Lambda invocations
Slide 29
Fly Live Ants
Slide 32
Redis Messaging and Job Queuing
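At the core is a simple Redis list pattern: producers LPUSH jobs onto a pending queue, and workers use BRPOPLPUSH to claim them atomically. A minimal sketch in TypeScript using ioredis; the queue names and job shape are illustrative assumptions, not the talk's actual code:

    import Redis from 'ioredis';

    const redis = new Redis(); // defaults to localhost:6379

    interface SegmentJob {
      classId: string;
      segmentUrl: string;
    }

    // Producer: push a job onto the pending queue.
    async function enqueue(job: SegmentJob): Promise<void> {
      await redis.lpush('segments:pending', JSON.stringify(job));
    }

    // Worker: BRPOPLPUSH blocks until a job arrives and atomically moves it
    // to an in-progress list, so a crashed worker can't silently lose work.
    async function claim(): Promise<string> {
      return (await redis.brpoplpush('segments:pending', 'segments:inprogress', 0)) as string;
    }

    // On success, remove the job from the in-progress list.
    async function ack(raw: string): Promise<void> {
      await redis.lrem('segments:inprogress', 1, raw);
    }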
Slide 35
Class scanner code
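The scanner's job is fan-out: enumerate every class and enqueue one crawl job per stream manifest. A minimal sketch under the same assumptions; the CDN URL pattern, queue name, and class list are hypothetical:

    import Redis from 'ioredis';

    const redis = new Redis();

    // The slides cite 2000 classes with 16 streams per class.
    async function scanClasses(classIds: string[]): Promise<void> {
      for (const classId of classIds) {
        for (let stream = 0; stream < 16; stream++) {
          const manifestUrl = `https://cdn.example.com/classes/${classId}/stream-${stream}/playlist.m3u8`;
          await redis.lpush('manifests:pending', JSON.stringify({ classId, manifestUrl }));
        }
      }
    }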
Slide 37
Crawler code
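The crawler treats each manifest like a web page: fetch it, extract the segment URIs (the non-comment lines), and enqueue one transfer job per segment. A sketch assuming Node 18+ (global fetch) and the hypothetical queue names above:

    import Redis from 'ioredis';

    const redis = new Redis();

    async function crawlOne(): Promise<void> {
      const raw = (await redis.brpoplpush('manifests:pending', 'manifests:inprogress', 0)) as string;
      const { classId, manifestUrl } = JSON.parse(raw);
      const playlist = await (await fetch(manifestUrl)).text();

      // In an HLS playlist, non-comment lines are segment URIs,
      // possibly relative to the manifest's own URL.
      const segments = playlist
        .split('\n')
        .map((line) => line.trim())
        .filter((line) => line.length > 0 && !line.startsWith('#'));

      for (const segment of segments) {
        const segmentUrl = new URL(segment, manifestUrl).toString();
        await redis.lpush('segments:pending', JSON.stringify({ classId, segmentUrl }));
      }
      await redis.lrem('manifests:inprogress', 1, raw);
    }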
Slide 40
Segment-transfer code
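This service drains the segment queue and fires one Lambda invocation per segment. A sketch using the AWS SDK v3; the function name and queue names remain illustrative assumptions:

    import Redis from 'ioredis';
    import { LambdaClient, InvokeCommand } from '@aws-sdk/client-lambda';

    const redis = new Redis();
    const lambda = new LambdaClient({});

    async function transferSegments(): Promise<void> {
      for (;;) {
        const raw = (await redis.brpoplpush('segments:pending', 'segments:inprogress', 0)) as string;
        await lambda.send(new InvokeCommand({
          FunctionName: 'segment-transfer',   // hypothetical function name
          InvocationType: 'Event',            // async fire-and-forget for throughput
          Payload: Buffer.from(raw),
        }));
        await redis.lrem('segments:inprogress', 1, raw);
      }
    }

Running many such workers in parallel, on one multi-core machine or a cluster, is what drives the concurrency discussed in the end results.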
Slide 44
λ code
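Each invocation pulls one segment through the source CDN and writes it to object storage. A sketch of such a handler, assuming Node 18+ and the AWS SDK v3; the bucket name and event shape are hypothetical:

    import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

    const s3 = new S3Client({});

    export async function handler(event: { segmentUrl: string; key: string }) {
      // Pull the segment through the source CDN.
      const res = await fetch(event.segmentUrl);
      // Segments range from 100 bytes to 2MB, so buffering in memory is safe.
      const body = Buffer.from(await res.arrayBuffer());
      await s3.send(new PutObjectCommand({
        Bucket: 'destination-video-library',  // hypothetical destination bucket
        Key: event.key,
        Body: body,
      }));
      return { key: event.key, bytes: body.length };
    }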
Slide 47
Seeing the solution in action
Slide 48
[Demo legend: completed segments vs. in-progress segments]
Slide 50
End Results
Slide 51
End results
● The speed of transferring files using this approach is staggering.
● During early tests, the system transferred four terabytes of data in two hours and twenty minutes!
● That’s roughly 523MB per second! (four terabytes ≈ 4,398,047MB; two hours and twenty minutes = 8,400 seconds; 4,398,047 ÷ 8,400 ≈ 523)
● And that’s nowhere near the maximum potential.
● Using a larger multi-core machine, or a cluster of multi-core machines, along with a higher concurrency limit on Lambda invocations, would yield even higher transfer speeds.