Slide 1

Cascading through Hadoop: Simpler MapReduce through data flows, by Matthew McCullough, Ambient Ideas, LLC

Slide 2

Matthew McCullough

Slide 3

✓ Using Hadoop?
✓ Work with Big Data?
✓ Familiar with MapReduce?

Slide 4

No content

Slide 5

http://delicious.com/matthew.mccullough/cascading

Slide 6

http://delicious.com/matthew.mccullough/hadoop

Slide 7

http://github.com/matthewmccullough/cascading-course

Slide 8

No content

Slide 9

MapReduce

Slide 10

a quick review...

Slide 11

classical Map & Reduce

Slide 12

now MapReduce®

Slide 13

[Diagram: Raw Data → Split → Map → Shuffle → Reduce → Processed Data]

Slide 14

Hadoop Java API implementation...

Slide 15

[Diagram: Raw Data → Split → Map → Shuffle → Reduce → Processed Data]

Slide 16

// The WordCount Mapper
public static class TokenizerMapper
    extends Mapper<Object, Text, Text, IntWritable> {

  private final static IntWritable one = new IntWritable(1);
  private Text word = new Text();

  public void map(Object key, Text value, Context context)
      throws IOException, InterruptedException {
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {
      word.set(itr.nextToken());
      context.write(word, one);
    }
  }
}

Slide 17

[Diagram: Raw Data → Split → Map → Shuffle → Reduce → Processed Data]

Slide 18

// The WordCount Reducer
public static class IntSumReducer
    extends Reducer<Text, IntWritable, Text, IntWritable> {

  private IntWritable result = new IntWritable();

  public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable val : values) {
      sum += val.get();
    }
    result.set(sum);
    context.write(key, result);
  }
}

Slide 19

but wait...

Slide 20

// The WordCount main()
public static void main(String[] args) throws Exception {
  Configuration conf = new Configuration();
  String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
  if (otherArgs.length != 2) {
    System.err.println("Usage: wordcount <in> <out>");
    System.exit(2);
  }
  Job job = new Job(conf, "word count");
  job.setJarByClass(WordCount.class);
  job.setMapperClass(TokenizerMapper.class);
  job.setCombinerClass(IntSumReducer.class);
  job.setReducerClass(IntSumReducer.class);
  job.setOutputKeyClass(Text.class);
  job.setOutputValueClass(IntWritable.class);
  FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
  FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
  System.exit(job.waitForCompletion(true) ? 0 : 1);
}

Slide 21

and how about multiple files?

Slide 22

package org.apache.hadoop.examples;

import java.io.BufferedReader;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.MultiFileInputFormat;
import org.apache.hadoop.mapred.MultiFileSplit;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.RecordReader;

Slide 23

  // set the InputFormat of the job to our InputFormat
  job.setInputFormat(MyInputFormat.class);
  // the keys are words (strings)
  job.setOutputKeyClass(Text.class);
  // the values are counts (ints)
  job.setOutputValueClass(IntWritable.class);
  // use the defined mapper
  job.setMapperClass(MapClass.class);
  // use the WordCount Reducer
  job.setCombinerClass(LongSumReducer.class);
  job.setReducerClass(LongSumReducer.class);
  FileInputFormat.addInputPaths(job, args[0]);
  FileOutputFormat.setOutputPath(job, new Path(args[1]));
  JobClient.runJob(job);
  return 0;
}

public static void main(String[] args) throws Exception {
  int ret = ToolRunner.run(new MultiFileWordCount(), args);
  System.exit(ret);
}
}

Slide 24

No content

Slide 25

// The WordCount main()
public static void main(String[] arg

Slide 26

No content

Slide 27

Coding a Java Flow

Slide 28

public class SimplestPipe1Flip {
  public static void main(String[] args) {
    String inputPath = "data/babynamedefinitions.csv";
    String outputPath = "output/simplestpipe1";

    Scheme sourceScheme = new TextDelimited( new Fields( "name", "definition" ), "," );
    Tap source = new Hfs( sourceScheme, inputPath );

    Scheme sinkScheme = new TextDelimited( new Fields( "definition", "name" ), " ++ " );
    Tap sink = new Hfs( sinkScheme, outputPath, SinkMode.REPLACE );

    Pipe assembly = new Pipe( "flip" );

    Properties properties = new Properties();
    FlowConnector.setApplicationJarClass(properties, SimplestPipe1Flip.class);
    FlowConnector flowConnector = new FlowConnector( properties );
    Flow flow = flowConnector.connect( "flipflow", source, sink, assembly );
    flow.complete();
  }
}

Slide 29

No content

Slide 30

The Author

Slide 31

Ignoring that Hadoop is as much about analytics as it is about integration leads to a fair number of compromises, including, but not exclusive to, a loss in quality of life (in trade for a false sense of accomplishment). -Chris Wensel, Cascading Inventor

Slide 32

http://cascading.org

Slide 33

http://concurrentinc.com

Slide 34

citizen of the big data domain

Slide 35

proper level of abstraction for Hadoop

Slide 36

No content

Slide 37

Hadoop: 2011

Slide 38

who's using Hadoop?

Slide 39

- Meetup.com
- AOL
- Bing
- Facebook
- Netflix
- Yahoo
- Twitter

Slide 40

Hadoop is as much about analytics as it is about integration. Ignoring that leads to crazy complex tool chains that typically involve XML. -Chris Wensel, Cascading Inventor

Slide 41

No content

Slide 42

✓ Two humans
✓ IBM cluster
✓ Hadoop
✓ Java
Go!

Slide 43

No content

Slide 44

No content

Slide 45

Hadoop DSLs

Slide 46

Pig approximates ETL

Slide 47

-- Pig Script
Person = LOAD 'people.csv' USING PigStorage(',');
Names = FOREACH Person GENERATE $2 AS name;
OrderedNames = ORDER Names BY name ASC;
GroupedNames = GROUP OrderedNames BY name;
NameCount = FOREACH GroupedNames GENERATE group, COUNT(OrderedNames);
STORE NameCount INTO 'names.out';

Slide 48

Hive approximates SQL

Slide 49

-- Hive Script
LOAD DATA INPATH 'shakespeare_freq' INTO TABLE shakespeare;
SELECT * FROM shakespeare WHERE freq > 100 SORT BY freq ASC LIMIT 10;

Slide 50

Cascading Groovy approximates MapReduce

Slide 51

// Cascading Groovy Script
def cascading = new Cascading()
def builder = cascading.builder();

Flow flow = builder.flow("wordcount") {
  source(input, scheme: text())
  tokenize(/[.,]*\s+/)
  group()
  count()
  group(["count"], reverse: true)
  sink(output, delete: true)
}

Slide 52

Cascalog approximates Datalog

Slide 53

;; Cascalog Script
(?<- (stdout) [?person] (age ?person 25))

Slide 54

No content

Slide 55

Here's another faux DSL for you!

Slide 56

Don't worry, Martin. Cascading isn't a DSL. Really.

Slide 57

No content

Slide 58

The Metaphor

Slide 59

Divide & Conquer

Slide 60

with a different metaphor

Slide 61

Water

Slide 62

Pipes

Slide 63

Taps

Slide 64

Source

Slide 65

Sink

Slide 66

Flows

Slide 67

Planner

Slide 68

Planner to optimize parallelism
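
Mapping the metaphor onto code: a minimal sketch, assuming the Cascading 1.x API used throughout this deck (the scheme and paths are invented for illustration):

// Water metaphor → Cascading API
Tap source = new Hfs( scheme, "data/in.csv" );                 // tap at the source: where the stream enters
Tap sink = new Hfs( scheme, "output/out", SinkMode.REPLACE );  // tap at the sink: where the stream exits
Pipe pipe = new Pipe( "copy" );                                // pipe the tuple stream flows through
Flow flow = new FlowConnector().connect( source, sink, pipe ); // flow, laid out as MapReduce jobs by the planner
flow.complete();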

Slide 69

No content

Slide 70

Tuples

Slide 71

Tuples

Slide 72

ordered list of elements

Slide 73

["Matthew", 2, true]

Slide 74

Tuple Stream

Slide 75

["Matthew", 2, true], ["Jay", 2, true], ["Peter", 0, false]

Slide 76

["Matthew", "Red"], ["Jay", "Grey"], ["Peter", "Brown"]
["Matthew", 2, true], ["Jay", 2, true], ["Peter", 0, false]
Co-Group

Slide 77

No content

Slide 78

The Process

Slide 79

[Diagram: Source Tap → Pipe (Head → Tail) → Sink Tap]

Slide 80

[Diagram: a Flow: Source Tap → Pipe (Head → Tail) → Pipe (Head → Tail) → Pipe (Head → Tail) → Sink Tap]

Slide 81

Late binding to taps
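
A minimal sketch of what late binding buys you, assuming the Cascading 1.x API (the scheme and paths are invented): the same assembly is reused, and the taps are chosen only when the flow is connected.

// One pipe assembly, bound to different taps at connect time
Pipe assembly = new Pipe( "flip" );
FlowConnector connector = new FlowConnector( properties );
Flow localFlow = connector.connect( new Hfs( scheme, "data/sample.csv" ),
                                    new Hfs( scheme, "output/test", SinkMode.REPLACE ), assembly );
Flow clusterFlow = connector.connect( new Hfs( scheme, "hdfs://namenode/data/in" ),
                                      new Hfs( scheme, "hdfs://namenode/data/out", SinkMode.REPLACE ), assembly );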

Slide 82

public class SimplestPipe1Flip {
  public static void main(String[] args) {
    String inputPath = "data/babynamedefinitions.csv";
    String outputPath = "output/simplestpipe1";

    Scheme sourceScheme = new TextDelimited( new Fields( "name", "definition" ), "," );
    Tap source = new Hfs( sourceScheme, inputPath );

    Scheme sinkScheme = new TextDelimited( new Fields( "definition", "name" ), " ++ " );
    Tap sink = new Hfs( sinkScheme, outputPath, SinkMode.REPLACE );

    Pipe assembly = new Pipe( "flip" );

    Properties properties = new Properties();
    FlowConnector.setApplicationJarClass(properties, SimplestPipe1Flip.class);
    FlowConnector flowConnector = new FlowConnector( properties );
    Flow flow = flowConnector.connect( "flipflow", source, sink, assembly );
    flow.complete();
  }
}

Slide 83

Pipe Types: Each, GroupBy, CoGroup, Every, Sub-Assembly
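
Roughly how the pipe types compose into a word count, as a sketch assuming Cascading's stock RegexSplitGenerator and Count operations (the field names are illustrative):

// Each applies a Function to every tuple; Every applies an Aggregator to each group
Pipe assembly = new Pipe( "wordcount" );
assembly = new Each( assembly, new Fields( "line" ),
                     new RegexSplitGenerator( new Fields( "word" ), "\\s+" ) ); // cascading.operation.regex
assembly = new GroupBy( assembly, new Fields( "word" ) );
assembly = new Every( assembly, new Count( new Fields( "count" ) ) );           // cascading.operation.aggregator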

Slide 84

[Diagram: a Flow assembled from pipes: GroupBy, CoGroup, Every, Sub-Assembly, Each, CoGroup]

Slide 85

DAG

Slide 86

[Diagram: a Cascade of three flows, each assembled from GroupBy, CoGroup, Every, Sub-Assembly, and Each pipes]
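
Wiring several flows into one cascade is a one-liner. A sketch assuming Cascading's CascadeConnector; the three flow names are invented:

// Sketch: schedule several flows as a single DAG (cascading.cascade)
Cascade cascade = new CascadeConnector().connect( importFlow, countFlow, exportFlow );
cascade.complete(); // each flow runs only after the flows it depends on complete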

Slide 87

public class SimplestPipe3CoGroup {
  public static void main(String[] args) {
    String inputPathDefinitions = "data/babynamedefinitions.csv";
    String inputPathCounts = "data/babynamecounts.csv";
    String outputPath = "output/simplestpipe3";

    Scheme sourceSchemeDefinitions = new TextDelimited( new Fields( "name", "definition" ), "," );
    Scheme sourceSchemeCounts = new TextDelimited( new Fields( "name", "count" ), "," );
    Tap sourceDefinitions = new Hfs( sourceSchemeDefinitions, inputPathDefinitions );
    Tap sourceCounts = new Hfs( sourceSchemeCounts, inputPathCounts );

    Scheme sinkScheme = new TextDelimited( new Fields( "dname", "count", "definition" ), " ^^^ " );
    Tap sink = new Hfs( sinkScheme, outputPath, SinkMode.REPLACE );

    Pipe definitionspipe = new Pipe( "definitionspipe" );
    Pipe countpipe = new Pipe( "countpipe" );

    // Join the tuple streams
    Fields commonfields = new Fields( "name" );
    Fields newfields = new Fields( "dname", "definition", "cname", "count" );
    Pipe joinpipe = new CoGroup( definitionspipe, commonfields, countpipe, commonfields, newfields, new InnerJoin() );

    Properties properties = new Properties();
    FlowConnector.setApplicationJarClass(properties, SimplestPipe3CoGroup.class);
    FlowConnector flowConnector = new FlowConnector( properties );

    Map<String, Tap> sources = new HashMap<String, Tap>();
    sources.put("definitionspipe", sourceDefinitions);
    sources.put("countpipe", sourceCounts);

    Flow flow = flowConnector.connect( sources, sink, joinpipe );
    flow.complete();
  }
}

Slide 88

No content

Slide 89

Motivations

Slide 90

Big Data is a g r o w i n g field

Slide 91

MapReduce is the primary technique

Slide 92

Cascading is becoming the MR standard

Slide 93

Why a new MR toolkit?
㊌ Simpler coding
㊌ More logical processing abstractions
㊌ Run MapReduce locally
㊌ Debug jobs with ease

Slide 94

easy debugging...

Slide 95

public class SimplestPipe1Flip {
  public static void main(String[] args) {
    String inputPath = "data/babynamedefinitions.csv";
    String outputPath = "output/simplestpipe1";

    Scheme sourceScheme = new TextDelimited( new Fields( "name", "definition" ), "," );
    Tap source = new Hfs( sourceScheme, inputPath );

    Scheme sinkScheme = new TextDelimited( new Fields( "definition", "name" ), " ++ " );
    Tap sink = new Hfs( sinkScheme, outputPath, SinkMode.REPLACE );

    Pipe assembly = new Pipe( "flip" );
    // OPTIONAL: Debug the tuple
    //assembly = new Each( assembly, DebugLevel.VERBOSE, new Debug() );

    Properties properties = new Properties();
    FlowConnector.setApplicationJarClass(properties, SimplestPipe1Flip.class);
    FlowConnector flowConnector = new FlowConnector( properties );
    // OPTIONAL: Have the planner use or filter out the debugging statements
    //FlowConnector.setDebugLevel( properties, DebugLevel.VERBOSE );

    Flow flow = flowConnector.connect( "flipflow", source, sink, assembly );
    flow.complete();
  }
}

Slide 96

Cascading User Roles
㊌ Application executor
㊌ Process assembler
㊌ Operation developer
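
The operation developer writes the reusable building blocks the other two roles assemble and run. A minimal sketch of a custom Filter, assuming the Cascading 1.x operation API (the class name and rule are invented):

// Sketch: a custom Filter that drops tuples with short "name" values
public class ShortNameFilter extends BaseOperation implements Filter {
  public boolean isRemove( FlowProcess flowProcess, FilterCall filterCall ) {
    // returning true removes the current tuple from the stream
    return filterCall.getArguments().getString( "name" ).length() < 3;
  }
}

It would be attached to an assembly with something like: assembly = new Each( assembly, new Fields( "name" ), new ShortNameFilter() );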

Slide 97

Hadoop is never used alone. The dirty secret is that it is really a huge ETL tool. -Chris Wensel, Cascading Inventor

Slide 98

50gal Hot Water Heater

Slide 99

Tankless Hot Water Heater

Slide 100

No content

Slide 101

Building

Slide 102

Let's prep the build

Slide 103

Why?

Slide 104

When in doubt, look at the Cascading source code. If something is not documented in this User Guide, the source code will give you clear instructions on what to do or expect. -Chris Wensel, Cascading Inventor

Slide 105

https://github.com/cwensel

Slide 106

No content

Slide 107

https://github.com/cwensel/cascading

Slide 108

Ant 1.8.x

Slide 109

Ivy 2.2.x

Slide 110

# Verified Ant > 1.8.x
# Verified Ivy > 2.2.x
$ ant retrieve

Slide 111

No content

Slide 112

Let's build it...

Slide 113

$ ls -al
drwxr-xr-x  15 mccm06  staff   510B Feb 21 14:31 ./
drwxr-xr-x  20 mccm06  staff   680B Feb 17 15:39 ../
drwxr-xr-x  10 mccm06  staff   340B Feb 19 01:40 cascading.groovy_git/
drwxr-xr-x   7 mccm06  staff   238B Feb 19 01:40 cascading.hbase_git/
drwxr-xr-x   8 mccm06  staff   272B Feb 19 01:40 cascading.jdbc_git/
drwxr-xr-x   8 mccm06  staff   272B Feb 19 01:39 cascading.load_git/
drwxr-xr-x   9 mccm06  staff   306B Feb 19 01:39 cascading.memcached_git/
drwxr-xr-x   9 mccm06  staff   306B Feb 19 01:39 cascading.multitool_git/
drwxr-xr-x  10 mccm06  staff   340B Feb 19 01:39 cascading.samples_git/
drwxr-xr-x   8 mccm06  staff   272B Feb 19 01:39 cascading.work_git/
drwxr-xr-x  14 mccm06  staff   476B Feb 21 14:26 cascading_git/
drwxr-xr-x  11 mccm06  staff   374B Dec 31 16:16 cascalog_git/
lrwxr-xr-x   1 mccm06  staff    45B Feb 21 14:31 hadoop -> /Applications/Dev/hadoop-family/hadoop-0.20.1

Slide 114

# Trying Hadoop == 0.21.0
# Verified 'hadoop' is neighbor to cascading
$ ant compile

Slide 115

[javac] cascading_git/src/core/cascading/tap/hadoop/TapIterator.java:52: cannot find symbol
[javac] symbol  : class JobConf
[javac] location: class cascading.tap.hadoop.TapIterator
[javac]   private final JobConf conf;
[javac]                 ^
[javac] cascading_git/src/core/cascading/tap/hadoop/TapIterator.java:54: cannot find symbol
[javac] symbol  : class InputSplit
[javac] location: class cascading.tap.hadoop.TapIterator
[javac]   private InputSplit[] splits;
[javac]           ^
[javac] cascading_git/src/core/cascading/tap/hadoop/TapIterator.java:56: cannot find symbol
[javac] symbol  : class RecordReader
[javac] location: class cascading.tap.hadoop.TapIterator
[javac]   private RecordReader reader;
[javac]           ^
[javac] cascading_git/src/core/cascading/tap/hadoop/TapIterator.java:75: cannot find symbol
[javac] symbol  : class JobConf
[javac] location: class cascading.tap.hadoop.TapIterator
[javac]   public TapIterator( Tap tap, JobConf conf ) throws IOException
[javac]                                ^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 100 errors

Slide 116

Hadoop 0.21.0

Slide 117

Argh!

Slide 118

Hadoop 0.20.1

Slide 119

# Verified Hadoop == 0.20.1
# Verified 'hadoop' is neighbor to cascading
$ ant compile

Slide 120

Buildfile: cascading_git/build.xml

init:
  [echo] initializing cascading environment...
  [mkdir] Created dir: cascading_git/build/core
  [mkdir] Created dir: cascading_git/build/xml
  [mkdir] Created dir: cascading_git/build/test
  [mkdir] Created dir: cascading_git/build/testresults

echo-compile-buildnum:

compile:
  [echo] building cascading...
  [javac] Compiling 238 source files to cascading_git/build/core
  [javac] Note: Some input files use or override a deprecated API.
  [javac] Note: Recompile with -Xlint:deprecation for details.
  [javac] Note: Some input files use unchecked or unsafe operations.
  [javac] Note: Recompile with -Xlint:unchecked for details.
  [copy] Copying 1 file to cascading_git/build/core/cascading
  [javac] Compiling 5 source files to cascading_git/build/xml
  [javac] Compiling 85 source files to cascading_git/build/test
  [javac] Note: Some input files use or override a deprecated API.
  [javac] Note: Recompile with -Xlint:deprecation for details.
  [javac] Note: Some input files use unchecked or unsafe operations.
  [javac] Note: Recompile with -Xlint:unchecked for details.
  [copy] Copying 24 files to cascading_git/build/test

BUILD SUCCESSFUL
Total time: 7 seconds

Slide 121

No content

Slide 122

Planner

Slide 123

planner diagrams

Slide 124

public class SimplestPipe2Sort {
  public static void main(String[] args) {
    String inputPath = "data/babynamedefinitions.csv";
    String outputPath = "output/simplestpipe2";

    Scheme sourceScheme = new TextDelimited( new Fields( "name", "definition" ), "," );
    Tap source = new Hfs( sourceScheme, inputPath );

    Scheme sinkScheme = new TextDelimited( new Fields( "definition", "name" ), " ^^^ " );
    Tap sink = new Hfs( sinkScheme, outputPath, SinkMode.REPLACE );

    Pipe assembly = new Pipe( "sortreverse" );
    Fields groupFields = new Fields( "name" );
    // OPTIONAL: Set the comparator
    //groupFields.setComparator("name", Collections.reverseOrder());
    assembly = new GroupBy( assembly, groupFields );

    Properties properties = new Properties();
    FlowConnector.setApplicationJarClass(properties, SimplestPipe2Sort.class);
    FlowConnector flowConnector = new FlowConnector( properties );
    Flow flow = flowConnector.connect( "sortflow", source, sink, assembly );
    flow.complete();

    // OPTIONAL: Output a debugging diagram
    //flow.writeDOT(outputPath + "/flowdiagram.dot");
  }
}

Slide 125

No content

Slide 126

Abstraction Levels

Slide 127

a unique Java API

Slide 128

similar to command abstractions in the core JVM

Slide 129

[Diagram: parallel abstraction stacks: CPU Instruction → Assembly Language → Class File → Java → Groovy DSL, alongside Hadoop → Cascading → Cascalog / Cascading Groovy]

Slide 130

No content

Slide 131

Builders

Slide 132

a unique Java API

Slide 133

but enhanced via...

Slide 134

Jython

Slide 135

JRuby

Slide 136

Clojure

Slide 137

Groovy

Slide 138

No content

Slide 139

Coding a Groovy Flow

Slide 140

Groovy

Slide 141

setup...

Slide 142

$ cd cascading.groovy
$ ant dist
$ cd dist
$ groovy setup.groovy

Slide 143

coding...

Slide 144

def cascading = new Cascading()
def builder = cascading.builder();

Flow flow = builder.flow("wordcount") {
  source(input, scheme: text())

  // output new tuple for each split,
  // result replaces stream by default
  tokenize(/[.,]*\s+/)

  group() // group on stream

  // count values in group
  // creates 'count' field by default
  count()

  // group/sort on 'count', reverse the sort order
  group(["count"], reverse: true)

  sink(output, delete: true)
}

Slide 145

execution...

Slide 146

$ groovy wordcount
INFO - Concurrent, Inc - Cascading 1.2.1 [hadoop-0.19.2+]
INFO - [wordcount] starting
INFO - [wordcount]  source: Hfs["TextLine[['line']->[ALL]]"]["output/fetched/fetch.txt"]"]
INFO - [wordcount]  sink: Hfs["TextLine[['line']->[ALL]]"]["output/counted"]"]
INFO - [wordcount]  parallel execution is enabled: false
INFO - [wordcount]  starting jobs: 2
INFO - [wordcount]  allocating threads: 1
INFO - [wordcount] starting step: (1/2) TempHfs["SequenceFile[[0, 'count']]"][wordcount/18750/]
INFO - [wordcount] starting step: (2/2) Hfs["TextLine[['line']->[ALL]]"]["output/counted"]"]
INFO - deleting temp path output/counted/_temporary

Slide 147

No content

Slide 148

Cascalog

Slide 149

Clojure

Slide 150

No content

Slide 151

functional MR programming

Slide 152

No content

Slide 153

No content

Slide 154

㊌ Simple: Functions, filters, and aggregators all use the same syntax. Joins are implicit and natural.

Slide 155

㊌ Expressive: Logical composition is very powerful, and you can run arbitrary Clojure code in your query with little effort.

Slide 156

㊌ Interactive: Run queries from the Clojure REPL.

Slide 157

㊌ Scalable: Cascalog queries run as a series of MapReduce jobs.

Slide 158

㊌ Query Anything: Query HDFS data, database data, and/or local data by making use of Cascading’s “Tap” abstraction.

Slide 159

influenced by Datalog

Slide 160

http://www.ccs.neu.edu/home/ramsdell/tools/datalog/datalog.html

Slide 161

No content

Slide 162

query planner

Slide 163

alternative to Pig, Hive

Slide 164

read or write any data source

Slide 165

higher density of code

Slide 166

(?<- (stdout) [?word ?count]
     (sentence ?s)
     (split ?s :> ?word)
     (c/count ?count))

[Diagram labels: (stdout) is the sink Tap, [?word ?count] the Outputs, (sentence ?s) the Source]

Slide 167

No content

Slide 168

Is It Fully Baked?

Slide 169

Java is 16 years old

Slide 170

No content

Slide 171

Hadoop is ~6 years old

Slide 172

No content

Slide 173

Cascading is 4 years old

Slide 174

No content

Slide 175

Cascading is MapReduce done right

Slide 176

No content

Slide 177

No content

Slide 178

Cascading through Hadoop: Simpler MapReduce through data flows, by Matthew McCullough, Ambient Ideas, LLC

Slide 179

No content