Slide 1

Slide 1 text

Helping Travellers Make Better Hotel Choices 500 Million Times a Month
Miguel Cabrera (@mfcabrera)
Photo: https://www.flickr.com/photos/18694857@N00/5614701858/

Slide 2

Slide 2 text

ABOUT ME

Slide 3

Slide 3 text

ABOUT ME
•  Neuberliner
•  Systems and Informatics Engineering, Universidad Nacional - Med
•  M.Sc. in Informatics, TUM; Hons. Technology Management
•  Work for TrustYou as Data (Scientist|Engineer|Juggler)™
•  Founder and former organizer of Munich DataGeeks

Slide 4

Slide 4 text

TODAY

Slide 5

Slide 5 text

AGENDA
•  What we do
•  Architecture
•  Technology
•  Crawling
•  Textual Processing
•  Workflow Management and Scale
•  Sample Application

Slide 6

Slide 6 text

WHAT WE DO

Slide 7

Slide 7 text

For every hotel on the planet, provide a summary of traveler reviews.

Slide 8

Slide 8 text

No content

Slide 9

Slide 9 text

No content

Slide 10

Slide 10 text

No content

Slide 11

Slide 11 text

No content

Slide 12

Slide 12 text

No content

Slide 13

Slide 13 text

No content

Slide 14

Slide 14 text

No content

Slide 15

Slide 15 text

Tasks
•  Crawling
•  Natural Language Processing / Semantic Analysis
•  Record Linkage / Deduplication
•  Ranking
•  Recommendation
•  Classification
•  Clustering

Slide 16

Slide 16 text

ARCHITECTURE

Slide 17

Slide 17 text

Data Flow: Crawling → Semantic Analysis → Database → API → Clients (Google, Kayak, TY Analytics)

Slide 18

Slide 18 text

Stack
Batch Layer (Hadoop Cluster): Hadoop, Python, Pig*, Java*
Service Layer (Application Machines): PostgreSQL, MongoDB, Redis, Cassandra

Slide 19

Slide 19 text

SOME NUMBERS

Slide 20

Slide 20 text

25 supported languages

Slide 21

Slide 21 text

500,000+ Properties

Slide 22

Slide 22 text

30,000,000+ daily crawled reviews

Slide 23

Slide 23 text

Deduplicated against 250,000,000+ reviews

Slide 24

Slide 24 text

300,000+ daily new reviews

Slide 25

Slide 25 text

Lots of text
Photo: https://www.flickr.com/photos/22646823@N08/2694765397/

Slide 26

Slide 26 text

TECHNOLOGY

Slide 27

Slide 27 text

Python
•  NumPy
•  NLTK
•  Scikit-Learn
•  Pandas
•  IPython / Jupyter
•  Scrapy

Slide 28

Slide 28 text

Python + Hadoop
•  Hadoop Streaming
•  MRJob (see the sketch below)
•  Oozie
•  Luigi
•  …
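None of these tools is shown on the slide itself; purely as a hedged illustration of the MRJob style (not TrustYou's actual jobs), a minimal word count might look like this:

from mrjob.job import MRJob

class MRWordCount(MRJob):

    # Split each input line into words and emit (word, 1) pairs.
    def mapper(self, _, line):
        for word in line.strip().split():
            yield word, 1

    # Sum the counts for each word.
    def reducer(self, word, counts):
        yield word, sum(counts)

if __name__ == '__main__':
    MRWordCount.run()

Run locally with python wordcount.py input.txt, or against a cluster with the -r hadoop runner.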

Slide 29

Slide 29 text

Crawling

Slide 30

Slide 30 text

Crawling

Slide 31

Slide 31 text

No content

Slide 32

Slide 32 text

No content

Slide 33

Slide 33 text

No content

Slide 34

Slide 34 text

No content

Slide 35

Slide 35 text

Scrapy
•  Build your own web crawlers
•  Extract data via CSS selectors, XPath, regexes, etc.
•  Handles queuing, request parallelism, cookies, throttling …
•  Comprehensive and well-designed
•  Commercial support by http://scrapinghub.com/
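A minimal sketch of such a Scrapy spider (the site, selectors, and field names are hypothetical; this uses the current Scrapy API):

import scrapy

class ReviewSpider(scrapy.Spider):
    # Hypothetical spider; URL and selectors are made up for illustration.
    name = "reviews"
    start_urls = ["http://example.com/hotels/reviews"]

    def parse(self, response):
        # Extract data via CSS selectors.
        for review in response.css("div.review"):
            yield {
                "title": review.css("h3::text").get(),
                "text": review.css("p.body::text").get(),
            }
        # Scrapy queues and parallelizes followed requests for us.
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, self.parse)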

Slide 36

Slide 36 text

No content

Slide 37

Slide 37 text

No content

Slide 38

Slide 38 text

No content

Slide 39

Slide 39 text

No content

Slide 40

Slide 40 text

Crawling at TrustYou
•  2 - 3 million new reviews/week
•  Customers want alerts 8 - 24h after review publication!
•  Smart crawl frequency & depth, but still high overhead
•  Pools of constantly refreshed EC2 proxy IPs
•  Direct API connections with many sites

Slide 41

Slide 41 text

Crawling at TrustYou
•  Custom framework very similar to Scrapy
•  Runs on Hadoop cluster (100 nodes)
•  Not 100% suitable for MapReduce
•  Nodes mostly waiting
•  Coordination/messaging between nodes required (see the sketch below):
–  Distributed queue
–  Rate limiting
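TrustYou's framework itself is custom and not shown in the talk; purely as a hedged toy sketch of the coordination pattern named above, a Redis-backed distributed queue with naive per-domain rate limiting might look like this (all names and values are made up):

import time
import redis

r = redis.Redis(host="localhost")  # shared queue, visible to all crawl nodes
MIN_DELAY = 2.0  # seconds between hits to one domain (illustrative value)

def worker(fetch):
    last_hit = {}  # domain -> timestamp of this node's last request
    while True:
        _, url = r.blpop("crawl:queue")  # blocking pop from the distributed queue
        url = url.decode("utf-8")
        domain = url.split("/")[2]
        wait = MIN_DELAY - (time.time() - last_hit.get(domain, 0))
        if wait > 0:
            time.sleep(wait)  # crude rate limiting
        last_hit[domain] = time.time()
        fetch(url)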

Slide 42

Slide 42 text

Text Processing

Slide 43

Slide 43 text

Text Processing
Raw text → Sentence splitting → Tokenizing → Stopwords → Stemming → Topic Models / Word Vectors / Classification
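As a hedged sketch of that pipeline with NLTK (the example text is made up; assumes the punkt and stopwords corpora are already downloaded):

import nltk
from nltk.corpus import stopwords
from nltk.stem import SnowballStemmer

raw = "The rooms were great. The gym was too small."
stemmer = SnowballStemmer("english")
stop = set(stopwords.words("english"))

for sentence in nltk.sent_tokenize(raw):           # sentence splitting
    tokens = nltk.word_tokenize(sentence)          # tokenizing
    tokens = [t.lower() for t in tokens if t.isalpha()]
    tokens = [t for t in tokens if t not in stop]  # stopword removal
    print([stemmer.stem(t) for t in tokens])       # stemming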

Slide 44

Slide 44 text

Text Processing

Slide 45

Slide 45 text

Text Processing
•  “great rooms” → JJ NN
•  “great hotel” → JJ NN
•  “rooms are terrible” → NN VB JJ
•  “hotel is terrible” → NN VB JJ

>>> nltk.pos_tag(nltk.word_tokenize("hotel is terrible"))
[('hotel', 'NN'), ('is', 'VBZ'), ('terrible', 'JJ')]

Slide 46

Slide 46 text

Semantic Analysis
•  25+ languages
•  Linguistic system (morphology, taggers, grammars, parsers …)
•  Hadoop: scale out CPU
•  ~1B opinions in the database
•  Python for ML & NLP libraries

Slide 47

Slide 47 text

Word2Vec/Doc2Vec

Slide 48

Slide 48 text

Group of algorithms

Slide 49

Slide 49 text

An instance of shallow learning

Slide 50

Slide 50 text

Feature learning model

Slide 51

Slide 51 text

Generates real-valued vector representations of words

Slide 52

Slide 52 text

“king” – “man” + “woman” = “queen”

Slide 53

Slide 53 text

Word2Vec
Source: http://technology.stitchfix.com/blog/2015/03/11/word-is-worth-a-thousand-vectors/

Slide 54

Slide 54 text

Word2Vec
Source: http://technology.stitchfix.com/blog/2015/03/11/word-is-worth-a-thousand-vectors/

Slide 55

Slide 55 text

Word2Vec
Source: http://technology.stitchfix.com/blog/2015/03/11/word-is-worth-a-thousand-vectors/

Slide 56

Slide 56 text

Word2Vec
Source: http://technology.stitchfix.com/blog/2015/03/11/word-is-worth-a-thousand-vectors/

Slide 57

Slide 57 text

Word2Vec
Source: http://technology.stitchfix.com/blog/2015/03/11/word-is-worth-a-thousand-vectors/

Slide 58

Slide 58 text

Word2Vec
Source: http://technology.stitchfix.com/blog/2015/03/11/word-is-worth-a-thousand-vectors/

Slide 59

Slide 59 text

Similar words/documents are nearby vectors

Slide 60

Slide 60 text

Word2vec offers a similarity metric for words

Slide 61

Slide 61 text

Can be extended to paragraphs and documents

Slide 62

Slide 62 text

A fast Python-based implementation is available via Gensim
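Tying the last few slides together, a minimal Gensim sketch (the corpus variable and parameter values are assumptions, and this is the pre-4.0 Gensim API matching the era of the talk; newer versions use vector_size and model.wv):

from gensim.models import Word2Vec

# sentences: an iterable of token lists, e.g. the output of the NLP pipeline
model = Word2Vec(sentences, size=100, window=5, min_count=5, workers=4)

model.most_similar("hotel")             # similar words are nearby vectors
model.similarity("great", "excellent")  # similarity metric between two words
model.most_similar(positive=["king", "woman"], negative=["man"])  # ~ "queen"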

Slide 63

Slide 63 text

No content

Slide 64

Slide 64 text

Workflow Management and Scale

Slide 65

Slide 65 text

Pipeline stages: Crawl → Extract → Clean → Stats / ML / NLP

Slide 66

Slide 66 text

Luigi: “A Python framework for data flow definition and execution”

Slide 67

Slide 67 text

Luigi
•  Build complex pipelines of batch jobs
•  Dependency resolution
•  Parallelism
•  Resume failed jobs
•  Some support for Hadoop

Slide 68

Slide 68 text

Luigi

Slide 69

Slide 69 text

Luigi
•  Dependency definition
•  Hadoop / HDFS integration
•  Object-oriented abstraction
•  Parallelism
•  Resume failed jobs
•  Visualization of pipelines
•  Command line integration

Slide 70

Slide 70 text

Minimal Boilerplate Code

import luigi
import six

class WordCount(luigi.Task):
    date = luigi.DateParameter()

    def requires(self):
        return [InputText(self.date)]

    def output(self):
        return luigi.LocalTarget('/tmp/%s' % self.date)

    def run(self):
        count = {}
        for f in self.input():
            for line in f.open('r'):
                for word in line.strip().split():
                    count[word] = count.get(word, 0) + 1
        out = self.output().open('w')
        for word, n in six.iteritems(count):
            out.write("%s\t%d\n" % (word, n))
        out.close()

Slide 71

Slide 71 text

Task Parameters

class WordCount(luigi.Task):
    date = luigi.DateParameter()  # task parameter, settable from the command line

    def requires(self):
        return [InputText(self.date)]

    def output(self):
        return luigi.LocalTarget('/tmp/%s' % self.date)

    def run(self):
        count = {}
        for f in self.input():
            for line in f.open('r'):
                for word in line.strip().split():
                    count[word] = count.get(word, 0) + 1
        out = self.output().open('w')
        for word, n in six.iteritems(count):
            out.write("%s\t%d\n" % (word, n))
        out.close()

Slide 72

Slide 72 text

Programmatically Defined Dependencies

class WordCount(luigi.Task):
    date = luigi.DateParameter()

    def requires(self):
        # dependencies are plain Python: return the upstream task(s)
        return [InputText(self.date)]

    def output(self):
        return luigi.LocalTarget('/tmp/%s' % self.date)

    def run(self):
        count = {}
        for f in self.input():
            for line in f.open('r'):
                for word in line.strip().split():
                    count[word] = count.get(word, 0) + 1
        out = self.output().open('w')
        for word, n in six.iteritems(count):
            out.write("%s\t%d\n" % (word, n))
        out.close()

Slide 73

Slide 73 text

Each Task Produces an Output

class WordCount(luigi.Task):
    date = luigi.DateParameter()

    def requires(self):
        return [InputText(self.date)]

    def output(self):
        # every task declares the target it produces
        return luigi.LocalTarget('/tmp/%s' % self.date)

    def run(self):
        count = {}
        for f in self.input():
            for line in f.open('r'):
                for word in line.strip().split():
                    count[word] = count.get(word, 0) + 1
        out = self.output().open('w')
        for word, n in six.iteritems(count):
            out.write("%s\t%d\n" % (word, n))
        out.close()

Slide 74

Slide 74 text

Write Logic in Python

class WordCount(luigi.Task):
    date = luigi.DateParameter()

    def requires(self):
        return [InputText(self.date)]

    def output(self):
        return luigi.LocalTarget('/tmp/%s' % self.date)

    def run(self):
        # the actual work is plain Python
        count = {}
        for f in self.input():
            for line in f.open('r'):
                for word in line.strip().split():
                    count[word] = count.get(word, 0) + 1
        out = self.output().open('w')
        for word, n in six.iteritems(count):
            out.write("%s\t%d\n" % (word, n))
        out.close()

Slide 75

Slide 75 text

Hadoop

Slide 76

Slide 76 text

No content

Slide 77

Slide 77 text

Hadoop = Java?
Photo: https://www.flickr.com/photos/12914838@N00/15015146343/

Slide 78

Slide 78 text

Hadoop Streaming

cat input.txt | ./map.py | sort | ./reduce.py > output.txt

Slide 79

Slide 79 text

Hadoop Streaming

hadoop jar contrib/streaming/hadoop-*streaming*.jar \
  -file /home/hduser/mapper.py -mapper /home/hduser/mapper.py \
  -file /home/hduser/reducer.py -reducer /home/hduser/reducer.py \
  -input /user/hduser/text.txt -output /user/hduser/gutenberg-output
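The mapper.py and reducer.py referenced above are ordinary stdin-to-stdout scripts; a hedged word-count pair might look like this:

# mapper.py: emit "word<TAB>1" for every word on stdin
import sys
for line in sys.stdin:
    for word in line.strip().split():
        print("%s\t1" % word)

# reducer.py: sum counts per word (Hadoop sorts mapper output by key)
import sys
current, total = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t")
    if word != current:
        if current is not None:
            print("%s\t%d" % (current, total))
        current, total = word, 0
    total += int(count)
if current is not None:
    print("%s\t%d" % (current, total))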

Slide 80

Slide 80 text

Luigi + Hadoop/HDFS

class WordCount(luigi.hadoop.JobTask):
    date = luigi.DateParameter()

    def requires(self):
        return InputText(self.date)

    def output(self):
        return luigi.hdfs.HdfsTarget('%s' % self.date)

    def mapper(self, line):
        for word in line.strip().split():
            yield word, 1

    def reducer(self, key, values):
        yield key, sum(values)

Slide 81

Slide 81 text

Go and learn:

Slide 82

Slide 82 text

Data Flow Visualization

Slide 83

Slide 83 text

Data Flow Visualization

Slide 84

Slide 84 text

Before
•  Bash scripts + cron
•  Manual cleanup
•  Manual failure recovery
•  Hard(er) to debug

Slide 85

Slide 85 text

Now
•  Complex nested Luigi job graphs
•  Automatic retries
•  Still hard to debug

Slide 86

Slide 86 text

We use it for…
•  Standalone executables
•  Dump data from databases
•  General Hadoop Streaming
•  Bash scripts / MRJob
•  Pig* scripts

Slide 87

Slide 87 text

You can wrap anything

Slide 88

Slide 88 text

Sample Application

Slide 89

Slide 89 text

Reviews are boring…

Slide 90

Slide 90 text

No content

Slide 91

Slide 91 text

No content

Slide 92

Slide 92 text

Source: http://www.telegraph.co.uk/travel/hotels/11240430/TripAdvisor-the-funniest-reviews-biggest-controversies-and-best-spoofs.html

Slide 93

Slide 93 text

Reviews highlight the individuality and personality of users

Slide 94

Slide 94 text

Snippets from Reviews
•  “Hips don’t lie”
•  “Maid was banging”
•  “Beautiful bowl flowers”
•  “Irish dance, I love that”
•  “No ghost sighting”
•  “One ghost touching”
•  “Too much cardio, not enough squats in the gym”
•  “it is like hugging a bony super model”

Slide 95

Slide 95 text

Hotel Reviews + Gensim + Python + Luigi = ?

Slide 96

Slide 96 text

Pipeline tasks: ExtractSentences → LearnBigrams → LearnModel → ExtractClusterIds → UploadEmbeddings (Pig)

Slide 97

Slide 97 text

No content

Slide 98

Slide 98 text

from gensim.models.doc2vec import Doc2Vec

class LearnModelTask(luigi.Task):
    # Parameters.... blah blah blah

    def output(self):
        return luigi.LocalTarget(os.path.join(self.output_directory, self.model_out))

    def requires(self):
        return LearnBigramsTask()

    def run(self):
        sentences = LabeledClusterIDSentence(self.input().path)
        model = Doc2Vec(sentences=sentences,
                        size=int(self.size),
                        dm=int(self.distmem),
                        negative=int(self.negative),
                        workers=int(self.workers),
                        window=int(self.window),
                        min_count=int(self.min_count),
                        train_words=True)
        model.save(self.output().path)

Slide 99

Slide 99 text

Word2vec/Doc2vec offer a similarity metric for words and documents

Slide 100

Slide 100 text

Similarities are useful for non-personalized recommender systems

Slide 101

Slide 101 text

Non-personalized recommenders recommend items based on what other consumers have said about the items.
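With Doc2Vec this becomes a nearest-neighbour query over document vectors. A hedged sketch, assuming reviews were tagged with hotel IDs during training and using the pre-4.0 Gensim API (the file name and tags are made up):

from gensim.models.doc2vec import Doc2Vec

# After training, each hotel's reviews carry the hotel's ID as the document tag.
model = Doc2Vec.load("hotel_reviews.doc2vec")  # hypothetical model file

# Hotels whose review language is most similar to hotel_123's:
for hotel_id, score in model.docvecs.most_similar("hotel_123", topn=5):
    print(hotel_id, score)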

Slide 102

Slide 102 text

http://demo.trustyou.com

Slide 103

Slide 103 text

Takeaways

Slide 104

Slide 104 text

Takeaways
•  It is possible to use Python as the primary language for large-scale data processing on Hadoop.
•  It is not a perfect setup, but it works well most of the time.
•  Keep your ecosystem open to other technologies.

Slide 105

Slide 105 text

We are hiring [email protected]

Slide 106

Slide 106 text

We are hiring [email protected]

Slide 107

Slide 107 text

Questions?