Slide 1

Slide 1 text

Powering a Startup with Apache Spark
Kevin, Between (VCNC), [email protected]
#EUent8

Slide 2

Slide 2 text

Seoul, South Korea

Slide 3

Slide 3 text

Gangnam, Hongdae, Itaewon, Myungdong

Slide 4

Slide 4 text

No content

Slide 5

Slide 5 text

Beta users, release, …M downloads, …M downloads, global launches, Between …M downloads, Between starts monetization, …M downloads, global expansion, new business, team of …

Slide 6

Slide 6 text

Kevin Kim
• Came from Seoul, South Korea
• Co-founder, used to be a product developer
• Now a data analyst, engineer, and team leader
• Founder of the Korea Spark User Group
• Committer and PMC member of Apache Zeppelin

Slide 7

Slide 7 text

Between Data Team

Slide 8

Slide 8 text

Intro to Between Data Team
• Data engineer × 4
– Manager: engineer with a broad stack of knowledge and experience
– Junior engineer, used to be a server engineer
– Senior engineer with lots of experience and skills
– Data engineer, used to be a top-level Android developer
• Hiring a data analyst and a machine learning expert

Slide 9

Slide 9 text

What the Between Data Team Is Doing
• Analysis
– Service monitoring
– Analyze usage of new features and build product strategies
• Data Infrastructure
– Build and manage infrastructure
– Spark, Zeppelin, AWS, BI tools, etc.
• Third-Party Management
– Mobile attribution tools for marketing (Kochava, Tune, AppsFlyer, etc.)
– Google Analytics, Firebase, etc.
– Ad networks

Slide 10

Slide 10 text

What the Between Data Team Is Doing
• Machine Learning Study & Research
– For the next business model
• Team Support
– To build business, product, and monetization strategies
• Performance Marketing Analysis
– Monitoring the effectiveness of marketing budgets
• Product Development
– Improving client performance, server architecture, etc.

Slide 11

Slide 11 text


Slide 12

Slide 12 text

Sunset @ Between Office

Slide 13

Slide 13 text

Technologies

Slide 14

Slide 14 text

Requirements
• Big Data
– 2 TB/day of log data from millions of DAU
– 20M users
• Small Team
– Team of 4, needs to support 50 people
• Tiny Budget
– Company is just past BEP (break-even point)
• Need a very efficient tech stack!
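A quick back-of-envelope check of the numbers above. The 2 TB/day figure comes from the slide; the exact DAU count is not given ("millions of DAU"), so the 5M value below is a hypothetical placeholder used only to illustrate the per-user scale.

```python
# Sizing sketch for the log pipeline described above.
# 2 TB/day is from the slide; the DAU value is an assumption.

TB = 1024 ** 4

def per_user_log_bytes(daily_log_bytes: int, dau: int) -> float:
    """Average log volume generated per active user per day."""
    return daily_log_bytes / dau

# Assuming ~5M daily active users (hypothetical, within "millions"):
avg = per_user_log_bytes(2 * TB, 5_000_000)
print(f"{avg / 1024:.0f} KiB per user per day")
```

At that assumed scale, each active user produces a few hundred KiB of logs per day, which is why a single day already lands in terabyte territory.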

Slide 15

Slide 15 text

Way We Work
• Use Apache Spark as a general processing engine
• Scriptify everything with Apache Zeppelin
• Heavy use of AWS and Spot Instances to cut cost
• Careful selection of BI dashboard tools

Slide 16

Slide 16 text

Apache Spark as a General Engine
• Definitely the best way to deal with big data (as you all know!)
• Its performance and agility exactly meet startup requirements
– Have used Spark since 2014
• A great match for cloud services, especially Spot Instances
– Utilizing the bursty nature of cloud workloads

Slide 17

Slide 17 text

Scriptify Everything with Zeppelin
• Doing everything on Zeppelin!
• Daily batch tasks in the form of Spark scripts (using the Zeppelin scheduler)
• Ad hoc analysis
• Cluster control scripts
• The world's first user of Zeppelin!
• More than 200 Zeppelin notebooks
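The daily-batch pattern above can be sketched as a single scheduled notebook paragraph. This is a hypothetical fragment, not a notebook from the talk: the bucket, paths, and column names are made up, and it assumes Zeppelin's per-note cron scheduler (Quartz syntax, e.g. `0 0 5 * * ?` for 05:00 daily) and a `%pyspark` interpreter with a `spark` session already injected.

```python
%pyspark
# Hypothetical daily batch paragraph; schedule the note with
# Zeppelin's cron scheduler (Quartz expression such as "0 0 5 * * ?").
from datetime import date

run_date = date.today().isoformat()

# Read the day's raw JSON logs, aggregate, and write results back.
logs = spark.read.json("s3://example-bucket/logs/%s/" % run_date)
daily = logs.groupBy("event").count()
daily.write.mode("overwrite").parquet("s3://example-bucket/daily/%s/" % run_date)
```

Because the whole job lives in the notebook, the same paragraph doubles as documentation and as the place to debug when a batch run fails.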

Slide 18

Slide 18 text

AWS Cloud
• Spot Instances are my friend!
– Mostly use Spot Instances for analysis
– Only 10~20% of the cost compared to On-Demand Instances
• Dynamic cluster launch with auto scaling
– Launch clusters automatically for batch analysis
– Manually launch more clusters from Zeppelin with an auto-scale script
– Automatically shrink clusters when there is no usage
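The savings claim above is simple arithmetic. The hourly rates below are hypothetical examples, not actual AWS prices; only the 10~20% ratio comes from the slide.

```python
# Rough cost comparison for the Spot-Instance strategy above.
# Prices are made-up examples; the ratio is the one the slide quotes.

def monthly_cost(hourly_rate: float, hours: float = 24 * 30) -> float:
    """Cost of keeping one instance up for a month at a flat rate."""
    return hourly_rate * hours

on_demand = monthly_cost(1.00)   # assumed $1.00/h On-Demand rate
spot = monthly_cost(0.15)        # ~15% of On-Demand, mid-range of 10~20%

savings = 1 - spot / on_demand
print(f"spot is {spot / on_demand:.0%} of on-demand cost")
```

At that ratio the same analysis budget buys roughly 5~10x the compute, which is what makes short-lived, auto-launched analysis clusters affordable.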

Slide 19

Slide 19 text

BI Dashboard Tools
• Use Zeppelin as a dashboard via Spark SQL with ZEPL
• Holistics (holistics.io) or Dash (plot.ly/products/dash/)

Slide 20

Slide 20 text

Questions & Challenges

Slide 21

Slide 21 text

RDD API or DataFrame API
• Spark now has two very different styles of API
– Programmatic RDD API
– SQL-like DataFrame / Dataset API
• For many simple ad hoc queries
– DataFrame works
• For more complex, deep-dive analytic questions
– RDD works
• For a while, we have mostly used RDD, with DataFrame for ML or simple ad hoc tasks
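The contrast on this slide can be imitated without a cluster: below, the same per-user event count is computed once in a programmatic, RDD-like style and once declaratively in SQL (stdlib `sqlite3` standing in for Spark SQL). The data is made up for illustration.

```python
# RDD-style vs SQL-style on the same tiny dataset (not actual Spark code).
import sqlite3
from collections import Counter

events = [("alice", "msg"), ("bob", "msg"), ("alice", "photo")]

# RDD-like: explicit functional transformations, full per-record control
rdd_style = Counter(user for user, _ in events)

# DataFrame/SQL-like: declarative and concise for simple aggregations
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (user TEXT, kind TEXT)")
db.executemany("INSERT INTO events VALUES (?, ?)", events)
sql_style = dict(db.execute("SELECT user, COUNT(*) FROM events GROUP BY user"))

assert rdd_style == Counter(sql_style)  # both give {'alice': 2, 'bob': 1}
```

For a plain group-and-count the SQL form is shorter; the programmatic form pays off when each record needs custom logic that is awkward to express declaratively, which matches the slide's "complex, deep-dive" case for RDD.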

Slide 22

Slide 22 text

Sushi or Cooked Data
• Keep data in as raw a form as possible (fresh data!)
– ETL pipelines usually cause trouble and increase management cost
– The Sushi Principle (Joseph & Robert at Strata)
– Drastically reduces operation & management cost
– Apache Spark is a great tool for extracting insight from raw data
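A minimal illustration of the "sushi" approach above: keep logs as raw JSON lines and extract what you need at query time, instead of maintaining pre-cooked ETL tables. The field names are hypothetical.

```python
# Query raw JSON-line logs directly; no pre-built ETL table needed.
import json

raw_log = """\
{"user": "alice", "event": "send_message", "ms": 12}
{"user": "bob", "event": "open_app", "ms": 7}
{"user": "alice", "event": "open_app", "ms": 9}
"""

# Query-time extraction: a new question is just a new comprehension,
# with no schema migration on the stored data.
records = [json.loads(line) for line in raw_log.splitlines()]
opens = [r["user"] for r in records if r["event"] == "open_app"]
print(opens)  # ['bob', 'alice']
```

The trade-off is paying the parsing cost on every query, which is exactly where a fast engine like Spark makes raw-first storage practical.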

Slide 23

Slide 23 text

To Hire a Data Analyst or Not
• For a data analyst, the expected skill set is…
– Excel, SQL, R, …
• Skills that are not expected…
– Programmatic APIs like Spark RDD
– Cooking raw data
• We prefer data engineers with analytic skills
• May need to add some ETL tasks to work with data analysts

Slide 24

Slide 24 text

Better, Faster Team Support
• Better: Zeppelin is great for analyzing data, but not enough for sharing data with the team
– We have very few alternatives
– Increase use of BI dashboard tools?
– Still looking for a good way
• Faster: launching a Spark cluster takes a few minutes
– Not bad, but we want it faster
– Google BigQuery or AWS Athena
– SQL database with ETL

Slide 25

Slide 25 text

Future Plan
• Prepare for an exploding number of data operations!
– Team is growing, business is growing
– Number of tasks
– Number of 3rd-party data products
– Communication cost
• Operations with machine learning & deep learning
– A better way to manage task & data flow

Slide 26

Slide 26 text

Let's wrap up

Slide 27

Slide 27 text

What Matters for Us
• Support the Team
– Each team should see the proper data and make good decisions from it
– Regular meetings, fast response to ad hoc data requests
– Ultimately, our every activity should relate to the company's business
• Technical Lead
– Technical investments for the competence of both the company and individuals
– Working at Between should be the best experience for each individual
• Social Impact
– Does our work have a valuable impact on society?
– Open source, activity in the community

Slide 28

Slide 28 text

How Apache Spark Is Powering a Startup
• One great tool for general purposes
– Daily batch tasks
– Agile, ad hoc analysis
– Drawing dashboards
– Many more…
• Helps save time and reduce the cost of data operations
• A great experience for engineers and analysts
• Sharing know-how to and from the community

Slide 29

Slide 29 text

Work as a Data Engineer at a Startup
• Fascinating, fast evolution of tech
• Needs hard work and labor
• Data work will shine only when it is understood and used by teammates
Two Peasants Digging, Vincent van Gogh / Two Men Digging, Jean-François Millet

Slide 30

Slide 30 text

Thank you