Slide 1

Processing Hotel Reviews with Python
Miguel Cabrera (@mfcabrera) & Friends
Image: https://www.flickr.com/photos/18694857@N00/5614701858/

Slide 2

About Me

Slide 3

About Me
• Colombian
• Neuberliner
• Works at TrustYou as Data (Scientist|Engineer|Juggler)™
• Using Python for around 2 years
• Founder and former organizer of Munich DataGeeks

Slide 4

Agenda

Slide 5

Agenda
• Problem description
• Tools
• Sample Application

Slide 6

TrustYou

Slide 7

No content

Slide 8

No content

Slide 9

No content

Slide 10

Tasks
• Crawling
• Natural Language Processing / Semantic Analysis
• Record Linkage / Deduplication
• Ranking
• Recommendation
• Classification
• Clustering

Slide 11

Stack
Batch Layer (Hadoop Cluster)
• Hadoop
• Python
• Pig*
• Java*
Service Layer (Application Machines)
• PostgreSQL
• MongoDB
• Redis
• Cassandra

Slide 12

25 supported languages

Slide 13

500,000+ Properties

Slide 14

30,000,000+ reviews crawled daily

Slide 15

Deduplicated against 250,000,000+ reviews

Slide 16

200,000+ new reviews daily

Slide 17

Lots of text
Image: https://www.flickr.com/photos/22646823@N08/2694765397/

Slide 18

Clean, Filter, Join and Aggregate

Slide 19

Crawl → Extract → Clean → Stats → ML → NLP

Slide 20

Steps in different technologies

Slide 21

Steps can be run in parallel

Slide 22

Steps have complex dependencies among them
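
The dependency requirement amounts to ordering steps so that each one runs only after the steps it depends on. A minimal sketch of that idea, a topological sort over an invented dependency graph (not the actual TrustYou pipeline):

```python
def topo_order(deps):
    # deps maps each step to the steps it depends on;
    # returns an execution order where dependencies always come first
    order, seen = [], set()

    def visit(step):
        if step in seen:
            return
        seen.add(step)
        for d in deps.get(step, []):
            visit(d)
        order.append(step)

    for step in deps:
        visit(step)
    return order

# invented dependency graph for a crawl/NLP pipeline
deps = {
    "stats": ["clean"],
    "nlp": ["clean"],
    "clean": ["extract"],
    "extract": ["crawl"],
    "crawl": [],
}

print(topo_order(deps))  # ['crawl', 'extract', 'clean', 'stats', 'nlp']
```

Steps with no path between them (here, stats and nlp) can run in parallel once their shared dependencies are done.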

Slide 23

Requirements
• Technology
• Parallel / Scale
• Dependency management / Orchestration

Slide 24

Technology

Slide 25

Python
• NumPy
• NLTK
• scikit-learn
• pandas
• IPython / Jupyter

Slide 26

Scaling

Slide 27

Hadoop

Slide 28

Hadoop = Java?
Image: https://www.flickr.com/photos/12914838@N00/15015146343/

Slide 29

Python + Hadoop
• Hadoop Streaming
• MRJob
• Oozie
• Luigi
• …

Slide 30

Hadoop Streaming
cat input.txt | ./map.py | sort | ./reduce.py > output.txt

Slide 31

Hadoop Streaming
hadoop jar contrib/streaming/hadoop-*streaming*.jar \
    -file /home/hduser/mapper.py -mapper /home/hduser/mapper.py \
    -file /home/hduser/reducer.py -reducer /home/hduser/reducer.py \
    -input /user/hduser/text.txt -output /user/hduser/gutenberg-output
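
The mapper and reducer in a Hadoop Streaming job are ordinary scripts that read lines from stdin and write tab-separated key/value lines to stdout; the framework sorts between the two phases. A hypothetical word-count pair, written as functions (the talk does not show the script contents, so names and structure here are illustrative):

```python
def map_lines(lines):
    # mapper: emit one "word\t1" line per word (Hadoop Streaming convention)
    for line in lines:
        for word in line.strip().split():
            yield "%s\t1" % word

def reduce_lines(sorted_lines):
    # reducer: input arrives sorted by key, so equal words are adjacent
    current, total = None, 0
    for line in sorted_lines:
        word, count = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                yield "%s\t%d" % (current, total)
            current, total = word, 0
        total += int(count)
    if current is not None:
        yield "%s\t%d" % (current, total)

# mapper.py would do:  for out in map_lines(sys.stdin): print(out)
# reducer.py would do: for out in reduce_lines(sys.stdin): print(out)
```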

Slide 32

Who likes to write Bash scripts?

Slide 33

Orchestrate

Slide 34

Luigi: “A Python framework for data flow definition and execution”

Slide 35

Luigi
• Dependency definition
• Hadoop / HDFS integration
• Object-oriented abstraction
• Parallelism
• Resume failed jobs
• Visualization of pipelines
• Command line integration

Slide 36

Minimal Boilerplate Code

class WordCount(luigi.Task):
    date = luigi.DateParameter()

    def requires(self):
        return InputText(self.date)

    def output(self):
        return luigi.LocalTarget('/tmp/%s' % self.date)

    def run(self):
        count = {}
        for f in self.input():
            for line in f.open('r'):
                for word in line.strip().split():
                    count[word] = count.get(word, 0) + 1
        f = self.output().open('w')
        for word, c in six.iteritems(count):
            f.write("%s\t%d\n" % (word, c))
        f.close()

Slide 37

Task Parameters (the same WordCount code as Slide 36, highlighting the date parameter)

Slide 38

Programmatically Defined Dependencies (highlighting requires())

Slide 39

Each Task produces an output (highlighting output())

Slide 40

Write Logic in Python (highlighting run())

Slide 41

Luigi + Hadoop/HDFS

class WordCount(luigi.hadoop.JobTask):
    date = luigi.DateParameter()

    def requires(self):
        return InputText(self.date)

    def output(self):
        return luigi.hdfs.HdfsTarget('%s' % self.date)

    def mapper(self, line):
        for word in line.strip().split():
            yield word, 1

    def reducer(self, key, values):
        yield key, sum(values)

Slide 42

Data Flow Visualization

Slide 43

Luigi
• Minimal boilerplate code
• Programmatically defined dependencies
• Integration with HDFS / Hadoop
• Task synchronization
• Can wrap anything

Slide 44

Before
• Bash scripts + cron
• Manual cleanup
• Manual failure recovery
• Hard(er) to debug

Slide 45

Now
• Complex nested Luigi job graphs
• Automatic retries
• Still hard to debug

Slide 46

We use it for…
• Standalone executables
• Dumping data from databases
• General Hadoop Streaming
• Bash scripts / MRJob
• Pig* scripts

Slide 47

You can wrap anything

Slide 48

You can wrap anything: Pig

Slide 49

The right tool for the right job

Slide 50

Pig is a high-level platform for creating MapReduce programs with Hadoop

Slide 51

SQL:
SELECT f3, SUM(f2), AVG(f1) FROM relation
WHERE f1 > 500
GROUP BY f3

Pig Latin:
rel = LOAD 'relation' AS (f1: int, f2: int, f3: chararray);
rel = FILTER rel BY f1 > 500;
by_f3 = GROUP rel BY f3;
result = FOREACH by_f3 GENERATE group, SUM(rel.f2), AVG(rel.f1);

Python:
def map(r):
    if r['f1'] > 500:
        yield r['f3'], [r['f1'], r['f2']]

def reduce(k, values):
    summ = 0
    avg = 0
    for r in values:
        avg += r[0]
        summ += r[1]
    avg = avg / float(len(values))
    yield k, [summ, avg]
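
The Python column on this slide is the map/reduce formulation of the same aggregation. A runnable sketch with invented sample rows, where a small group-by driver stands in for the Hadoop shuffle:

```python
from collections import defaultdict

def map_row(r):
    # keep rows passing the WHERE filter, keyed by f3, as on the slide
    if r['f1'] > 500:
        yield r['f3'], [r['f1'], r['f2']]

def reduce_group(k, values):
    # SUM(f2) and AVG(f1) per group
    summ = sum(v[1] for v in values)
    avg = sum(v[0] for v in values) / float(len(values))
    yield k, [summ, avg]

# invented sample data
rows = [
    {'f1': 600, 'f2': 10, 'f3': 'a'},
    {'f1': 700, 'f2': 20, 'f3': 'a'},
    {'f1': 100, 'f2': 99, 'f3': 'b'},  # filtered out: f1 <= 500
]

# the "shuffle": collect mapper output by key
groups = defaultdict(list)
for r in rows:
    for k, v in map_row(r):
        groups[k].append(v)

result = {k: out for k in groups for _, out in reduce_group(k, groups[k])}
print(result)  # {'a': [30, 650.0]}
```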

Slide 52

Pig + Python
• Data loading and transformation in Pig
• Other logic in Python
• Pig as a Luigi task
• Pig UDFs defined in Python

Slide 53

Sample Application

Slide 54

Reviews are boring…

Slide 55

No content

Slide 56

No content

Slide 57

Source: http://www.telegraph.co.uk/travel/hotels/11240430/TripAdvisor-the-funniest-reviews-biggest-controversies-and-best-spoofs.html

Slide 58

Reviews highlight the individuality and personality of users

Slide 59

Snippets from Reviews
• “Hips don’t lie”
• “Maid was banging”
• “Beautiful bowl flowers”
• “Irish dance, I love that”
• “No ghost sighting”
• “One ghost touching”
• “Too much cardio, not enough squats in the gym”
• “It is like hugging a bony super model”

Slide 60

Word2Vec

Slide 61

A group of algorithms

Slide 62

An instance of shallow learning

Slide 63

A feature learning model

Slide 64

Generates real-valued vector representations of words

Slide 65

“king” – “man” + “woman” = “queen”

Slide 66

Word2Vec
Source: http://technology.stitchfix.com/blog/2015/03/11/word-is-worth-a-thousand-vectors/

Slides 67-71

Word2Vec (illustrations from the same source as the previous slide)

Slide 72

Similar words are nearby vectors
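
A toy illustration of “nearby vectors”: cosine similarity over made-up 3-dimensional embeddings (real Word2Vec vectors have hundreds of dimensions, and these numbers are invented):

```python
import math

def cosine(u, v):
    # cosine similarity: dot(u, v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# invented toy vectors; similar words point in similar directions
vectors = {
    "hotel":  [0.9, 0.8, 0.1],
    "hostel": [0.8, 0.9, 0.2],
    "pizza":  [0.1, 0.2, 0.9],
}

assert cosine(vectors["hotel"], vectors["hostel"]) > cosine(vectors["hotel"], vectors["pizza"])
```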

Slide 73

Word2vec offers a similarity metric between words

Slide 74

Can be extended to paragraphs and documents

Slide 75

A fast Python-based implementation is available via Gensim

Slide 76

Hotel Reviews + Gensim + Python + Luigi = ?

Slide 77

ExtractSentences → LearnBigrams → LearnModel → ExtractClusterIds → UploadEmbeddings (Pig)

Slide 78

No content

Slide 79

from gensim.models.doc2vec import Doc2Vec

class LearnModelTask(luigi.Task):
    # Parameters.... blah blah blah

    def output(self):
        return luigi.LocalTarget(os.path.join(self.output_directory,
                                              self.model_out))

    def requires(self):
        return LearnBigramsTask()

    def run(self):
        sentences = LabeledClusterIDSentence(self.input().path)
        model = Doc2Vec(sentences=sentences,
                        size=int(self.size),
                        dm=int(self.distmem),
                        negative=int(self.negative),
                        workers=int(self.workers),
                        window=int(self.window),
                        min_count=int(self.min_count),
                        train_words=True)
        model.save(self.output().path)

Slide 80

Word2vec/Doc2vec offer a similarity metric between words

Slide 81

Similarities are useful for non-personalized recommender systems

Slide 82

Non-personalized recommenders recommend items based on what other consumers have said about the items.
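
A minimal sketch of that idea with invented review scores (not TrustYou's actual method): every user gets the same ranking, computed only from what other consumers said about each hotel:

```python
# invented sample data: hotel -> review scores from all users
reviews = {
    "Hotel Alpha": [4, 5, 5, 4],
    "Hotel Beta":  [2, 3, 2],
    "Hotel Gamma": [5, 4, 4],
}

def recommend(reviews, top_n=2):
    # non-personalized: rank by the aggregate opinion (mean score),
    # independent of who is asking
    ranked = sorted(reviews,
                    key=lambda h: sum(reviews[h]) / len(reviews[h]),
                    reverse=True)
    return ranked[:top_n]

print(recommend(reviews))  # ['Hotel Alpha', 'Hotel Gamma']
```

Replacing the mean score with a Doc2Vec similarity to a hotel the user is currently viewing gives a “hotels like this one” recommender in the same spirit.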

Slide 83

http://demo.trustyou.com

Slide 84

Takeaways

Slide 85

Takeaways
• It is possible to use Python as the primary language for large-scale data processing on Hadoop.
• It is not a perfect setup, but it works well most of the time.
• Keep your ecosystem open to other technologies.
• Product reviews contain much more information than just facts.

Slide 86

Questions?