Slide 1

Slide 1 text

HELLO WORLD! IN HADOOP
Chandra Yarlagadda
twitter.com/randommuses
linkedin.com/in/chandrayarlagadda
Email: [email protected]

Slide 2

Slide 2 text

Titanium Sponsors Platinum Sponsors Gold Sponsors

Slide 3

Slide 3 text

No content

Slide 4

Slide 4 text

No content

Slide 5

Slide 5 text

[Big Data landscape chart spanning Infrastructure, Analytics, Applications, Data Sources, Open Source Projects, and Cross Infrastructure / Analytics categories, with Hadoop Related, NoSQL, NewSQL, and MPP databases among the infrastructure segments. Source: www.bigdatalandscape.com]

Slide 6

Slide 6 text

[Big Data landscape chart spanning Infrastructure, Analytics, Applications, Data Sources, Open Source Projects, and Cross Infrastructure / Analytics categories, with Hadoop Related, NoSQL, NewSQL, and MPP databases among the infrastructure segments. Source: www.bigdatalandscape.com]

Slide 7

Slide 7 text

No content

Slide 8

Slide 8 text

No content

Slide 9

Slide 9 text

HDFS

Slide 10

Slide 10 text

HDFS MAP REDUCE

Slide 11

Slide 11 text

No content

Slide 12

Slide 12 text

What is Big Data?

Slide 13

Slide 13 text

What is Big Data?

Slide 14

Slide 14 text

What is Big Data? Volume Variety Velocity

Slide 15

Slide 15 text

No content

Slide 16

Slide 16 text

Before We Jump In!

Slide 17

Slide 17 text

Before We Jump In! • Hadoop 1.2.1

Slide 18

Slide 18 text

Before We Jump In! • Hadoop 1.2.1 • Daemon

Slide 19

Slide 19 text

Before We Jump In! • Hadoop 1.2.1 • Daemon • Pseudo - Distributed Mode

Slide 20

Slide 20 text

Before We Jump In! • Hadoop 1.2.1 • Daemon • Pseudo - Distributed Mode • Cloudera VM

Slide 21

Slide 21 text

Before We Jump In! • Hadoop 1.2.1 • Daemon • Pseudo - Distributed Mode • Cloudera VM • Do Not Start the Cloudera Manager

Slide 22

Slide 22 text

Before We Jump In! • Hadoop 1.2.1 • Daemon • Pseudo - Distributed Mode • Cloudera VM • Do Not Start the Cloudera Manager • Word Count

Slide 23

Slide 23 text

No content

Slide 24

Slide 24 text

HDFS

Slide 25

Slide 25 text

HDFS • Hadoop Distributed File System

Slide 26

Slide 26 text

HDFS • Hadoop Distributed File System • Designed for storing very large files with

Slide 27

Slide 27 text

HDFS • Hadoop Distributed File System • Designed for storing very large files with • Streaming data access

Slide 28

Slide 28 text

HDFS • Hadoop Distributed File System • Designed for storing very large files with • Streaming data access • running on clusters of commodity hardware

Slide 29

Slide 29 text

No content

Slide 30

Slide 30 text

No content

Slide 31

Slide 31 text

Data Nodes

Slide 32

Slide 32 text

Data Nodes

Slide 33

Slide 33 text

Name Node Data Nodes

Slide 34

Slide 34 text

Name Node Data Nodes

Slide 35

Slide 35 text

Name Node Block1 Block2 Block3 Data Nodes

Slide 36

Slide 36 text

Name Node Block1 Block2 Block3 Block1 Block2 Block3 Data Nodes

Slide 37

Slide 37 text

Name Node Block1 Block2 Block3 Block1 Block1 Block2 Block2 Block3 Block3 Data Nodes

Slide 38

Slide 38 text

Name Node Block1 Block2 Block3 Block1 Block1 Block2 Block2 Block3 Block3 Data Nodes

Slide 39

Slide 39 text

Name Node Block1 Block2 Block3 Block1 Block1 Block2 Block2 Block3 Block3 Data Nodes Block1

Slide 40

Slide 40 text

Name Node fsImage Block1 Block2 Block3 Block1 Block1 Block2 Block2 Block3 Block3 Data Nodes Block1

Slide 41

Slide 41 text

Name Node fsImage Block1 Block2 Block3 Block1 Block1 Block2 Block2 Block3 Block3 Data Nodes Block1

Slide 42

Slide 42 text

Name Node Edit Log fsImage Block1 Block2 Block3 Block1 Block1 Block2 Block2 Block3 Block3 Data Nodes Block1

Slide 43

Slide 43 text

Name Node Edit Log fsImage Block1 Block2 Block3 Block1 Block1 Block2 Block2 Block3 Block3 Data Nodes Block1

Slide 44

Slide 44 text

Name Node Edit Log fsImage Block1 Block2 Block3 Block1 Block1 Block2 Block2 Block3 Block3 Data Nodes Block1

Slide 45

Slide 45 text

Name Node Secondary Name Node Edit Log fsImage Block1 Block2 Block3 Block1 Block1 Block2 Block2 Block3 Block3 Data Nodes Block1

Slide 46

Slide 46 text

Name Node Secondary Name Node Edit Log fsImage Block1 Block2 Block3 Block1 Block1 Block2 Block2 Block3 Block3 Data Nodes Block1

Slide 47

Slide 47 text

Name Node Secondary Name Node Edit Log fsImage Block1 Block2 Block3 Block1 Block1 Block2 Block2 Block3 Block3 Data Nodes Block1

Slide 48

Slide 48 text

Secondary Name Node Edit Log fsImage Block1 Block2 Block3 Block1 Block1 Block2 Block2 Block3 Block3 Data Nodes Block1

Slide 49

Slide 49 text

Secondary Name Node Edit Log fsImage Block1 Block2 Block3 Block1 Block1 Block2 Block2 Block3 Block3 Data Nodes Block1

Slide 50

Slide 50 text

Secondary Name Node Edit Log fsImage Block1 Block2 Block3 Block1 Block1 Block2 Block2 Block3 Block3 Data Nodes Block1

Slide 51

Slide 51 text

Secondary Name Node Edit Log fsImage Block1 Block2 Block3 Block1 Block1 Block2 Block2 Block3 Block3 Data Nodes Block1

Slide 52

Slide 52 text

Secondary Name Node Edit Log fsImage Block1 Block2 Block3 Block1 Block1 Block2 Block2 Block3 Block3 Data Nodes Block1

Slide 53

Slide 53 text

Secondary Name Node Edit Log fsImage New Name Node Block1 Block2 Block3 Block1 Block1 Block2 Block2 Block3 Block3 Data Nodes Block1
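
The fsImage / Edit Log picture in the diagrams above can be sketched as a toy model: the fsImage is a snapshot of the namespace metadata, the edit log records changes made since that snapshot, and a checkpoint (the Secondary NameNode's job) merges the log back into the image. This is a plain-Java illustration of the idea, not NameNode code; every name in it is invented for the sketch.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of fsImage + Edit Log checkpointing (names invented for the sketch)
public class CheckpointSketch {
    // fsImage: file path -> block ids (snapshot of namespace metadata)
    static final Map<String, List<Integer>> fsImage = new HashMap<>();
    // edit log: (path, blockId) operations recorded since the last checkpoint
    static final List<Map.Entry<String, Integer>> editLog = new ArrayList<>();

    static void addBlock(String path, int blockId) {
        // A change is a cheap append to the log; the fsImage is not rewritten
        editLog.add(Map.entry(path, blockId));
    }

    static void checkpoint() {
        // Replay the edit log into the fsImage, then start a fresh log --
        // the merge the Secondary Name Node performs in the diagrams
        for (Map.Entry<String, Integer> op : editLog) {
            fsImage.computeIfAbsent(op.getKey(), k -> new ArrayList<>())
                   .add(op.getValue());
        }
        editLog.clear();
    }

    public static void main(String[] args) {
        addBlock("/data/file1", 1);
        addBlock("/data/file1", 2);
        checkpoint();
        System.out.println(fsImage.get("/data/file1")); // [1, 2]
        System.out.println(editLog.size());             // 0
    }
}
```

Because the merged fsImage plus a short edit log fully describe the namespace, a new NameNode (as in the last diagram) can be brought up from the checkpointed image.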

Slide 54

Slide 54 text

No content

Slide 55

Slide 55 text

HDFS - ATTRIBUTES

Slide 56

Slide 56 text

HDFS - ATTRIBUTES • Distributed

Slide 57

Slide 57 text

HDFS - ATTRIBUTES • Distributed • Write Once - Read Many Times Pattern

Slide 58

Slide 58 text

HDFS - ATTRIBUTES • Distributed • Write Once - Read Many Times Pattern • High Throughput - High Latency

Slide 59

Slide 59 text

HDFS - ATTRIBUTES • Distributed • Write Once - Read Many Times Pattern • High Throughput - High Latency • CAP (Consistency - Availability - Partition Tolerance)

Slide 60

Slide 60 text

HDFS - ATTRIBUTES • Distributed • Write Once - Read Many Times Pattern • High Throughput - High Latency • CAP (Consistency - Availability - Partition Tolerance) • Not Suitable for a lot of Small Files

Slide 61

Slide 61 text

HDFS - ATTRIBUTES • Distributed • Write Once - Read Many Times Pattern • High Throughput - High Latency • CAP (Consistency - Availability - Partition Tolerance) • Not Suitable for a lot of Small Files • High Network Utilization and Disk I/O
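
Some back-of-the-envelope block math illustrates the last two attributes. The numbers below are the Hadoop 1.x defaults (64 MB block size, replication factor 3); both are configurable, so treat this as an illustration rather than a spec.

```java
// HDFS block arithmetic, assuming Hadoop 1.x defaults (64 MB blocks, 3x replication)
public class BlockMath {
    static final long BLOCK_SIZE = 64L * 1024 * 1024; // 64 MB
    static final int REPLICATION = 3;

    // How many HDFS blocks a file of the given size occupies (ceiling division)
    static long blocksFor(long fileSize) {
        return (fileSize + BLOCK_SIZE - 1) / BLOCK_SIZE;
    }

    public static void main(String[] args) {
        long oneGig = 1024L * 1024 * 1024;

        // One 1 GB file: 16 blocks, 3 GB of raw disk across the cluster
        System.out.println(blocksFor(oneGig));                    // 16
        System.out.println(oneGig * REPLICATION / (1024 * 1024)); // 3072 (MB)

        // The same 1 GB stored as 1024 separate 1 MB files: every file
        // occupies at least one block, so the NameNode tracks 1024 block
        // entries in memory instead of 16 -- why HDFS dislikes small files
        System.out.println(1024 * blocksFor(1024 * 1024));        // 1024
    }
}
```

The triple write of every block is also where the high network utilization and disk I/O come from.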

Slide 62

Slide 62 text

No content

Slide 63

Slide 63 text

MAP REDUCE

Slide 64

Slide 64 text

MAP REDUCE It is a programming model and an associated implementation for

Slide 65

Slide 65 text

MAP REDUCE It is a programming model and an associated implementation for • Processing and Generating large data sets with a

Slide 66

Slide 66 text

MAP REDUCE It is a programming model and an associated implementation for • Processing and Generating large data sets with a • Parallel,

Slide 67

Slide 67 text

MAP REDUCE It is a programming model and an associated implementation for • Processing and Generating large data sets with a • Parallel, • Distributed Algorithm on a

Slide 68

Slide 68 text

MAP REDUCE It is a programming model and an associated implementation for • Processing and Generating large data sets with a • Parallel, • Distributed Algorithm on a • Cluster (HDFS)

Slide 69

Slide 69 text

No content

Slide 70

Slide 70 text

MAP REDUCE - KEY POINTS

Slide 71

Slide 71 text

MAP REDUCE - KEY POINTS • Map and Reduce Phase

Slide 72

Slide 72 text

MAP REDUCE - KEY POINTS • Map and Reduce Phase • User defines the Map and Reduce Functions

Slide 73

Slide 73 text

MAP REDUCE - KEY POINTS • Map and Reduce Phase • User defines the Map and Reduce Functions • Both Phases have a set of Key - Value Pairs

Slide 74

Slide 74 text

No content

Slide 75

Slide 75 text

MAP REDUCE - SINGLE REDUCER

Slide 76

Slide 76 text

MAP REDUCE - SINGLE REDUCER

Slide 77

Slide 77 text

MAP REDUCE - SINGLE REDUCER Mapper Mapper Mapper Reducer

Slide 78

Slide 78 text

MAP REDUCE - SINGLE REDUCER Mapper Mapper Mapper Input Split 1 Input Split 2 Input Split 3 Reducer

Slide 79

Slide 79 text

MAP REDUCE - SINGLE REDUCER Mapper Mapper Mapper Input Split 1 Input Split 2 Input Split 3 Reducer

Slide 80

Slide 80 text

MAP REDUCE - SINGLE REDUCER Mapper Mapper Mapper Input Split 1 Input Split 2 Input Split 3 Reducer

Slide 81

Slide 81 text

MAP REDUCE - SINGLE REDUCER Mapper Mapper Mapper Input Split 1 Input Split 2 Input Split 3 Reducer

Slide 82

Slide 82 text

MAP REDUCE - SINGLE REDUCER Mapper Mapper Mapper Input Split 1 Input Split 2 Input Split 3 Reducer Name Node

Slide 83

Slide 83 text

MAP REDUCE - SINGLE REDUCER Mapper Mapper Mapper Input Split 1 Input Split 2 Input Split 3 Reducer HDFS Name Node

Slide 84

Slide 84 text

MAP REDUCE - SINGLE REDUCER Mapper Mapper Mapper Input Split 1 Input Split 2 Input Split 3 Reducer HDFS Job Tracker Task Tracker Name Node

Slide 85

Slide 85 text

MAP REDUCE - SINGLE REDUCER Mapper Mapper Mapper Input Split 1 Input Split 2 Input Split 3 Reducer HDFS Job Tracker Task Tracker Name Node

Slide 86

Slide 86 text

MAP REDUCE - MULTIPLE REDUCERS Mapper Mapper Mapper Input Split 1 Input Split 2 Input Split 3 Reducer HDFS Job Tracker Task Tracker Name Node Reducer

Slide 87

Slide 87 text

MAP REDUCE - MULTIPLE REDUCERS Mapper Mapper Mapper Input Split 1 Input Split 2 Input Split 3 Reducer HDFS Job Tracker Task Tracker Name Node Reducer

Slide 88

Slide 88 text

MAP REDUCE - MULTIPLE REDUCERS Mapper Mapper Mapper Input Split 1 Input Split 2 Input Split 3 Reducer HDFS Job Tracker Task Tracker Name Node Reducer Sort and Shuffle

Slide 89

Slide 89 text

MAP REDUCE - MULTIPLE REDUCERS Mapper Mapper Mapper Input Split 1 Input Split 2 Input Split 3 Reducer HDFS Job Tracker Task Tracker Name Node Reducer Sort and Shuffle

Slide 90

Slide 90 text

MAP REDUCE - MULTIPLE REDUCERS Mapper Mapper Mapper Input Split 1 Input Split 2 Input Split 3 Reducer HDFS Job Tracker Task Tracker Name Node Reducer Sort and Shuffle

Slide 91

Slide 91 text

MAP REDUCE - ZERO REDUCERS Mapper Mapper Mapper Input Split 1 Input Split 2 Input Split 3 HDFS Job Tracker Name Node

Slide 92

Slide 92 text

MAP REDUCE - ZERO REDUCERS Mapper Mapper Mapper Input Split 1 Input Split 2 Input Split 3 HDFS Job Tracker Name Node

Slide 93

Slide 93 text

MAP REDUCE - ZERO REDUCERS Mapper Mapper Mapper Input Split 1 Input Split 2 Input Split 3 HDFS Job Tracker Task Tracker Name Node

Slide 94

Slide 94 text

No content

Slide 95

Slide 95 text

Key - Value

Slide 96

Slide 96 text

Key - Value

map(InputKey, InputValue) => Set of (IntermediateKey, IntermediateValue)

Slide 97

Slide 97 text

Key - Value

map(InputKey, InputValue) => Set of (IntermediateKey, IntermediateValue)
0 | Twinkle Twinkle Little Star  =>  Little 1, Star 1, Twinkle 1,1

Slide 98

Slide 98 text

Key - Value

map(InputKey, InputValue) => Set of (IntermediateKey, IntermediateValue)
0 | Twinkle Twinkle Little Star  =>  Little 1, Star 1, Twinkle 1,1

reduce(IntermediateKeys, IntermediateValues) => Set of (OutputKey, OutputValue)

Slide 99

Slide 99 text

Key - Value

map(InputKey, InputValue) => Set of (IntermediateKey, IntermediateValue)
0 | Twinkle Twinkle Little Star  =>  Little 1, Star 1, Twinkle 1,1

reduce(IntermediateKeys, IntermediateValues) => Set of (OutputKey, OutputValue)
Little 1, Star 1, Twinkle 1,1  =>  Little 1, Star 1, Twinkle 2
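
The key-value flow above can be simulated without Hadoop at all: "map" emits a 1 for every token, grouping the pairs by key stands in for sort-and-shuffle, and summing each key's values is the "reduce". The class and method names below are invented for the illustration; only the logic mirrors the WordCount example.

```java
import java.util.Map;
import java.util.StringTokenizer;
import java.util.TreeMap;

// Hadoop-free simulation of map -> shuffle -> reduce for word count
public class KeyValueFlow {
    static Map<String, Integer> wordCount(String line) {
        // TreeMap keeps keys sorted, like the sorted intermediate keys
        // a reducer receives after the shuffle
        Map<String, Integer> counts = new TreeMap<>();
        StringTokenizer itr = new StringTokenizer(line);
        while (itr.hasMoreTokens()) {
            // map emits (token, 1); merge groups by key and sums (reduce)
            counts.merge(itr.nextToken(), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(wordCount("Twinkle Twinkle Little Star"));
        // {Little=1, Star=1, Twinkle=2}
    }
}
```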

Slide 100

Slide 100 text

No content

Slide 101

Slide 101 text

MAP CODE

Slide 102

Slide 102 text

MAP CODE map ( InputKey, InputValue) => Set of (IntermediateKey,IntermediateValue)

Slide 103

Slide 103 text

MAP CODE

map(InputKey, InputValue) => Set of (IntermediateKey, IntermediateValue)

public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

Slide 104

Slide 104 text

MAP CODE

map(InputKey, InputValue) => Set of (IntermediateKey, IntermediateValue)

public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

  private Text word = new Text();
  private final static IntWritable one = new IntWritable(1);

Slide 105

Slide 105 text

MAP CODE

map(InputKey, InputValue) => Set of (IntermediateKey, IntermediateValue)

public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

  private Text word = new Text();
  private final static IntWritable one = new IntWritable(1);

  public void map(Object key, Text value, Context context)
      throws IOException, InterruptedException {
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {
      word.set(itr.nextToken());
      context.write(word, one);

Slide 106

Slide 106 text

MAP CODE

map(InputKey, InputValue) => Set of (IntermediateKey, IntermediateValue)

public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

  private Text word = new Text();
  private final static IntWritable one = new IntWritable(1);

  public void map(Object key, Text value, Context context)
      throws IOException, InterruptedException {
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {
      word.set(itr.nextToken());
      context.write(word, one);

Input: 0 | Twinkle Twinkle Little Star

Slide 107

Slide 107 text

MAP CODE

map(InputKey, InputValue) => Set of (IntermediateKey, IntermediateValue)

public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

  private Text word = new Text();
  private final static IntWritable one = new IntWritable(1);

  public void map(Object key, Text value, Context context)
      throws IOException, InterruptedException {
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {
      word.set(itr.nextToken());
      context.write(word, one);

Input: 0 | Twinkle Twinkle Little Star
Output: Little 1, Star 1, Twinkle 1,1

Slide 108

Slide 108 text

MAP CODE

map(InputKey, InputValue) => Set of (IntermediateKey, IntermediateValue)

public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

  private Text word = new Text();
  private final static IntWritable one = new IntWritable(1);

  public void map(Object key, Text value, Context context)
      throws IOException, InterruptedException {
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {
      word.set(itr.nextToken());
      context.write(word, one);
    }
  }
}

Slide 109

Slide 109 text

No content

Slide 110

Slide 110 text

REDUCE CODE

Slide 111

Slide 111 text

REDUCE CODE reduce(IntermediateKeys,IntermediateValues) => Set of (OutputKey,OutputValue)

Slide 112

Slide 112 text

REDUCE CODE

reduce(IntermediateKeys, IntermediateValues) => Set of (OutputKey, OutputValue)

public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

Slide 113

Slide 113 text

REDUCE CODE

reduce(IntermediateKeys, IntermediateValues) => Set of (OutputKey, OutputValue)

public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

  private IntWritable result = new IntWritable();

Slide 114

Slide 114 text

REDUCE CODE

reduce(IntermediateKeys, IntermediateValues) => Set of (OutputKey, OutputValue)

public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

  private IntWritable result = new IntWritable();

  public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable val : values) {
      sum += val.get();
    }
    result.set(sum);
    context.write(key, result);

Slide 115

Slide 115 text

REDUCE CODE

reduce(IntermediateKeys, IntermediateValues) => Set of (OutputKey, OutputValue)

public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

  private IntWritable result = new IntWritable();

  public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable val : values) {
      sum += val.get();
    }
    result.set(sum);
    context.write(key, result);

Input: Little 1, Star 1, Twinkle 1,1
Output: Little 1, Star 1, Twinkle 2

Slide 116

Slide 116 text

REDUCE CODE

reduce(IntermediateKeys, IntermediateValues) => Set of (OutputKey, OutputValue)

public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

  private IntWritable result = new IntWritable();

  public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable val : values) {
      sum += val.get();
    }
    result.set(sum);
    context.write(key, result);
  }
}

Slide 117

Slide 117 text

MAIN()

public static void main(String[] args) throws Exception {
  Configuration conf = new Configuration();
  Job job = Job.getInstance(conf, "word count");
  job.setJarByClass(WordCount.class);
  job.setMapperClass(TokenizerMapper.class);
  job.setReducerClass(IntSumReducer.class);
  job.setOutputKeyClass(Text.class);
  job.setOutputValueClass(IntWritable.class);
  job.setInputFormatClass(TextInputFormat.class);
  FileInputFormat.addInputPath(job, new Path(args[0]));
  FileOutputFormat.setOutputPath(job, new Path(args[1]));
  System.exit(job.waitForCompletion(true) ? 0 : 1);
}

Slide 118

Slide 118 text

No content

Slide 119

Slide 119 text

DEMO TIME

Slide 120

Slide 120 text

No content

Slide 121

Slide 121 text

RESOURCES

Slide 122

Slide 122 text

RESOURCES • https://hadoop.apache.org/

Slide 123

Slide 123 text

RESOURCES • https://hadoop.apache.org/ • Hadoop: The Definitive Guide - Tom White

Slide 124

Slide 124 text

RESOURCES • https://hadoop.apache.org/ • Hadoop: The Definitive Guide - Tom White • http://hortonworks.com/tutorials/

Slide 125

Slide 125 text

RESOURCES • https://hadoop.apache.org/ • Hadoop: The Definitive Guide - Tom White • http://hortonworks.com/tutorials/ • http://www.cloudera.com/content/cloudera/en/documentation/core/v5-3-x/topics/cloudera_quickstart_vm.html

Slide 126

Slide 126 text

No content

Slide 127

Slide 127 text

Questions?