Slide 1

Slide 1 text

IN-MEMORY ANALYTICS with APACHE SPARK and HAZELCAST

Slide 2

Slide 2 text

Who am I? Solutions Architect, Developer Advocate. @gamussa on the internetz. Please follow me on Twitter, I'm very interesting.

Slide 3

Slide 3 text

What's Apache Spark? Lightning-Fast Cluster Computing

Slide 4

Slide 4 text

Run programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk.

Slide 5

Slide 5 text

When to use Spark? Data science tasks, when the questions are unknown; data processing tasks, when you have too much data; or when you're tired of Hadoop.

Slide 6

Slide 6 text

Spark Architecture

Slide 7

Slide 7 text


Slide 8

Slide 8 text

RDD

Slide 9

Slide 9 text

Resilient Distributed Datasets (RDDs) are the primary abstraction in Spark: a fault-tolerant collection of elements that can be operated on in parallel.
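
As a hedged Java sketch of that abstraction (not code from the deck; the app name and sample data are made up), build an RDD and inspect the lineage Spark keeps for fault tolerance:

import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

SparkConf conf = new SparkConf().setAppName("rdd-demo").setMaster("local[*]");
JavaSparkContext sc = new JavaSparkContext(conf);

// A distributed, immutable collection of elements, split into partitions
JavaRDD<Integer> numbers = sc.parallelize(Arrays.asList(1, 2, 3, 4, 5));

// Derived RDDs remember their lineage; lost partitions are recomputed from it
JavaRDD<Integer> squares = numbers.map(x -> x * x);
System.out.println(squares.toDebugString());   // prints the lineage graph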

Slide 10

Slide 10 text


Slide 11

Slide 11 text

RDD Operations

Slide 12

Slide 12 text

Operations on RDDs: transformations and actions.

Slide 13

Slide 13 text

Transformations are lazy (not computed immediately); the transformed RDD is recomputed each time an action runs on it (the default behavior).
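
A small sketch of that laziness, reusing the JavaSparkContext sc from the earlier sketch (the sample data is made up): the map call only records work, and the count action triggers it.

JavaRDD<String> words = sc.parallelize(Arrays.asList("spark", "hazelcast", "imdg"));

// Transformation: recorded in the lineage, nothing computed yet
JavaRDD<Integer> lengths = words.map(String::length);

// Action: forces the computation (and, by default, a recomputation on every action)
long count = lengths.count();

// Opt out of recomputation by keeping the transformed RDD in memory
lengths.cache();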

Slide 14

Slide 14 text

RDD Transformations

Slide 15

Slide 15 text


Slide 16

Slide 16 text


Slide 17

Slide 17 text

RDD Actions

Slide 18

Slide 18 text


Slide 19

Slide 19 text


Slide 20

Slide 20 text

RDD Fault Tolerance

Slide 21

Slide 21 text


Slide 22

Slide 22 text

RDD Construction

Slide 23

Slide 23 text

Parallelized collections: take an existing Scala collection and run functions on it in parallel.
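
In the Java API used elsewhere in this deck, the equivalent is JavaSparkContext.parallelize (the explicit partition count below is an illustrative tuning knob, not something from the slide):

// Distribute a local collection over the cluster as 4 partitions
JavaRDD<Integer> distData = sc.parallelize(Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8), 4);

// Functions now run on the partitions in parallel
int sum = distData.reduce(Integer::sum);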

Slide 24

Slide 24 text

Hadoop datasets: run functions on each record of a file in the Hadoop distributed file system, or in any other storage system supported by Hadoop.
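
A hedged example of this second construction path, again reusing sc (the HDFS path is invented for illustration):

// Any storage Hadoop can read: HDFS, S3, local files, ...
JavaRDD<String> logs = sc.textFile("hdfs://namenode:8020/logs/2017/*.log");

// The function runs once per record (here, per line) of the files
long errors = logs.filter(line -> line.contains("ERROR")).count();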

Slide 25

Slide 25 text

What's Hazelcast IMDG? The Fastest In-memory Data Grid

Slide 26

Slide 26 text

Hazelcast IMDG is an operational, in-memory, distributed computing platform that manages data using in-memory storage and performs parallel execution for breakthrough application speed and scale.

Slide 27

Slide 27 text

High-Density Caching, In-Memory Data Grid, Web Session Clustering, Microservices Infrastructure

Slide 28

Slide 28 text

What's Hazelcast IMDG? An in-memory data grid, Apache v2 licensed: distributed caches (IMap, JCache), Java collections (IList, ISet, IQueue), messaging (Topic, RingBuffer), computation (ExecutorService, Map-Reduce).
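
A minimal sketch of a few of those primitives, assuming the Hazelcast 3.x core API that the connector shown later targets (the map and topic names are illustrative, not from the talk):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.core.ITopic;

// Start (or join) a cluster member
HazelcastInstance hz = Hazelcast.newHazelcastInstance();

// Distributed cache: entries are partitioned and backed up across members
IMap<Long, String> movies = hz.getMap("movie");
movies.put(1L, "The Matrix");

// Messaging: publish/subscribe topic
ITopic<String> news = hz.getTopic("news");
news.addMessageListener(msg -> System.out.println(msg.getMessageObject()));
news.publish("hello");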

Slide 29

Slide 29 text

(Partitioning diagram: green primary and green backup copies of a shard)

Slide 30

Slide 30 text


Slide 31

Slide 31 text

final SparkConf sparkConf = new SparkConf()
        .set("hazelcast.server.addresses", "localhost")      // Hazelcast cluster to read from
        .set("hazelcast.server.groupName", "dev")
        .set("hazelcast.server.groupPass", "dev-pass")
        .set("hazelcast.spark.readBatchSize", "5000")         // entries fetched per batch
        .set("hazelcast.spark.writeBatchSize", "5000")        // entries written per batch
        .set("hazelcast.spark.valueBatchingEnabled", "true");

final JavaSparkContext jsc = new JavaSparkContext("spark://localhost:7077", "app", sparkConf);

// Wrap the Spark context so RDDs can be created from Hazelcast structures
final HazelcastSparkContext hsc = new HazelcastSparkContext(jsc);

// RDDs backed by a Hazelcast IMap and a JCache cache
final HazelcastJavaRDD mapRdd = hsc.fromHazelcastMap("movie");
final HazelcastJavaRDD cacheRdd = hsc.fromHazelcastCache("my-cache");
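
As a follow-up usage sketch (not on the slides, and assuming the returned HazelcastJavaRDD behaves like an ordinary Spark pair RDD), plain Spark actions can then be applied to the grid-backed data:

// Count the entries read from the "movie" IMap
System.out.println("movie entries: " + mapRdd.count());

// Peek at a few key/value pairs
mapRdd.take(10).forEach(System.out::println);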

Slide 32

Slide 32 text


Slide 33

Slide 33 text


Slide 34

Slide 34 text


Slide 35

Slide 35 text

Demo

Slide 36

Slide 36 text

LIMITATIONS

Slide 37

Slide 37 text

DATA SHOULD NOT BE UPDATED WHILE READING FROM SPARK

Slide 38

Slide 38 text

WHY?

Slide 39

Slide 39 text

MAP EXPANSION SHUFFLES THE DATA INSIDE THE BUCKET

Slide 40

Slide 40 text

THE CURSOR NO LONGER POINTS TO THE CORRECT ENTRY, SO DUPLICATE OR MISSING ENTRIES COULD OCCUR

Slide 41

Slide 41 text

github.com/hazelcast/hazelcast-spark

Slide 42

Slide 42 text

THANKS! Any questions? You can find me at @gamussa [email protected]