Slide 1

Slide 1 text

Anomaly Detection with Apache Spark: Workshop
Sean Owen / Director of Data Science / Cloudera

Slide 2

Slide 2 text

www.flickr.com/photos/sammyjammy/1285612321/in/set-72157620597747933   ✗

Slide 3

Slide 3 text

✔

Slide 4

Slide 4 text

Anomaly Detection

•  Anomalous…
   •  Server metrics
   •  Access patterns
   •  Transactions
•  Labeled, or not
   •  Sometimes know examples of "unusual"
   •  Sometimes not
•  Applications
   •  Network security
   •  IT monitoring
   •  Fraud detection
   •  Error detection

Slide 5

Slide 5 text

Clustering

•  Find areas of dense data
•  Unusual = far from any cluster
•  What is "far"?
•  Unsupervised learning
•  Supervise with labels to improve, interpret

en.wikipedia.org/wiki/Cluster_analysis

Slide 6

Slide 6 text

k-means++ clustering

•  Simple, well-known, parallel
•  Assign points, update centers, repeat (sketched below)
•  Goal: points close to nearest cluster center
•  Must choose k = number of clusters

mahout.apache.org/users/clustering/fuzzy-k-means.html
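To make the assign/update loop concrete, here is a minimal single-machine sketch of one iteration; this is not the workshop's code (the workshop uses MLlib's distributed KMeans), and all names here are illustrative:

def kMeansStep(points: Seq[Array[Double]],
               centers: Seq[Array[Double]]): Seq[Array[Double]] = {
  def dist(a: Array[Double], b: Array[Double]) =
    math.sqrt(a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum)
  // Assignment step: group each point under its nearest current center
  val assigned = points.groupBy(p => centers.minBy(c => dist(p, c)))
  // Update step: move each center to the mean of its assigned points
  assigned.values.map { cluster =>
    cluster.reduce((a, b) => a.zip(b).map { case (x, y) => x + y })
           .map(_ / cluster.size)
  }.toSeq
}

Iterating kMeansStep until the centers stop moving is the "assign points, update centers, repeat" loop; k-means++ only changes how the initial centers are picked.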

Slide 7

Slide 7 text

KDD Cup '99 Data Set

Slide 8

Slide 8 text

KDD Cup 1999

•  Annual ML competition: www.sigkdd.org/kddcup/index.php
•  1999: network intrusion detection
•  4.9M connections
•  Most normal, some known attacks
•  Not a realistic sample!

Slide 9

Slide 9 text

0,tcp,http,SF,215,45076,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,1,0.00,0.00,0.00,0.00,1.00,0.00,0.00,0,0,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,normal.

(Callouts on the slide point into this record: Service, Bytes Received, % SYN errors, Label)
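As a quick, hypothetical illustration of where those callouts land (field positions inferred from the record above, not stated in the deck):

val line = "0,tcp,http,SF,215,45076,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,1,0.00,0.00,0.00,0.00,1.00,0.00,0.00,0,0,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,normal."
val fields = line.split(',')
val service       = fields(2)      // "http"
val bytesReceived = fields(5)      // "45076"
val label         = fields.last    // "normal."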

Slide 10

Slide 10 text

Apache Spark: Something For Everyone

•  Scala-based
   •  Expressive, efficient
•  JVM-based
   •  Consistent, Scala-like API
   •  RDD works like a collection (see the sketch below)
•  RDDs for everything
   •  Like Apache Crunch is Collection-like
•  Distributed
•  Hadoop-friendly
   •  Integrates with where data and cluster already are
   •  ETL no longer separate
•  Interactive REPL
•  MLlib
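A minimal sketch of the "RDD works like a collection" point; this is not from the deck, and assumes a SparkContext sc is available, as in the REPL:

// RDD operations mirror Scala collections: map, filter, reduce, count
val nums  = sc.parallelize(1 to 1000000)
val evens = nums.filter(_ % 2 == 0)                 // lazy, like a view
val sumOfSquares = evens.map(n => n.toLong * n).reduce(_ + _)

The same chain works, method for method, on a local Scala Seq; the difference is that the RDD version is partitioned across the cluster.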

Slide 11

Slide 11 text

Clustering, Take #0

Slide 12

Slide 12 text

val rawData = sc.textFile("/user/srowen/kddcup.data", 120)
rawData: org.apache.spark.rdd.RDD[String] =
  MappedRDD[13] at textFile at <console>:15

rawData.count
...
res1: Long = 4898431

rawData.take(1)
...
res3: Array[String] = Array(0,tcp,http,SF,215,45076,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,1,0.00,0.00,0.00,0.00,1.00,0.00,0.00,0,0,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,normal.)

Slide 13

Slide 13 text

0,tcp,http,SF,215,45076,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,1,0.00,0.00,0.00,0.00,1.00,0.00,0.00,0,0,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,normal.

Slide 14

Slide 14 text

val dataAndLabel = rawData.map { line =>
  val buffer = line.split(",").toBuffer
  buffer.remove(1, 3)                        // drop the 3 categorical columns
  val label = buffer.remove(buffer.length - 1)
  val vector = buffer.map(_.toDouble).toArray
  (vector, label)
}

val data = dataAndLabel.map(_._1).cache()

import org.apache.spark.mllib.clustering._
val kmeans = new KMeans()
val model = kmeans.run(data)

// Count (cluster, label) pairs and print them; foreach returns Unit,
// so there is nothing useful to assign to a val here
dataAndLabel.map {
  case (datum, label) => (model.predict(datum), label)
}.countByValue.toList.sorted.foreach {
  case ((cluster, label), count) =>
    println(f"$cluster%1s$label%18s$count%8s")
}

Slide 15

Slide 15 text

0            back.    2203
0 buffer_overflow.      30
0       ftp_write.       8
0    guess_passwd.      53
0            imap.      12
0         ipsweep.   12481
0            land.      21
0      loadmodule.       9
0        multihop.       7
0         neptune. 1072017
0            nmap.    2316
0          normal.  972781
0            perl.       3
0             phf.       4
0             pod.     264
0       portsweep.   10412
0         rootkit.      10
0           satan.   15892
0           smurf. 2807886
0             spy.       2
0        teardrop.     979
0     warezclient.    1020
0     warezmaster.      20
1       portsweep.       1

Terrible.

Slide 16

Slide 16 text

Clustering, Take #1: Choose k

Slide 17

Slide 17 text

import scala.math._
import org.apache.spark.rdd._

def distance(a: Array[Double], b: Array[Double]) =
  sqrt(a.zip(b).map(p => p._1 - p._2).map(d => d * d).sum)

def distToCentroid(datum: Array[Double], model: KMeansModel) =
  distance(model.clusterCenters(model.predict(datum)), datum)

def clusteringScore(data: RDD[Array[Double]], k: Int) = {
  val kmeans = new KMeans()
  kmeans.setK(k)
  val model = kmeans.run(data)
  // Score = mean distance from each point to its nearest centroid
  data.map(datum => distToCentroid(datum, model)).mean
}

// Evaluate several values of k concurrently via a parallel collection
val kScores = (5 to 40 by 5).par.map(k =>
  (k, clusteringScore(data, k)))

Slide 18

Slide 18 text


Slide 19

Slide 19 text

(5,  1938.8583418059309)
(10, 1614.7511288131)
(15, 1406.5960973638971)
(20, 1111.5970245349558)
(25,  905.536686115762)
(30,  931.7399112938756)
(35,  550.3231624120361)
(40,  443.10108628017787)

Slide 20

Slide 20 text

kmeans.setRuns(10)
kmeans.setEpsilon(1.0e-6)

val kScores = (30 to 100 by 10).par.map(k =>
  (k, clusteringScore(data, k)))

(30,  886.974050712821)
(40,  747.4268153420192)
(50,  370.2801596900413)
(60,  325.883722754848)
(70,  276.05785104442657)
(80,  193.53996444359856)
(90,  162.72596475533814)
(100, 133.19275833671574)

Slide 21

Slide 21 text

library(rgl)

clusters_data <- read.csv(pipe("hadoop fs -cat data/part-00000"))
clusters <- clusters_data[1]
data <- data.matrix(clusters_data[-c(1)])

# Project the high-dimensional data onto 3 random unit vectors for plotting
random_projection <- matrix(data = rnorm(3 * ncol(data)), ncol = 3)
random_projection_norm <-
  random_projection / sqrt(rowSums(random_projection * random_projection))

projected_data <- data.frame(data %*% random_projection_norm)

num_clusters <- nrow(unique(clusters))
palette <- rainbow(num_clusters)
colors = sapply(clusters, function(c) palette[c])
plot3d(projected_data, col = colors, size = 1)

Slide 22

Slide 22 text

(Chart: 3D random-projection plot of the clustered data, as produced by the rgl code above.)

Slide 23

Slide 23 text

Clustering, Take #2: Normalize

Slide 24

Slide 24 text

Normalization

•  "z score": z = (x − μ) / σ (see the sketch below)
•  Normalize away scale differences
•  (Mean doesn't matter)
•  Assumes normal-ish distribution
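For a single feature column, this is all the transformation amounts to; a trivial local sketch, not the deck's code (the distributed, all-columns version is on the next slide):

def zscores(xs: Seq[Double]): Seq[Double] = {
  val mean  = xs.sum / xs.size
  val stdev = math.sqrt(xs.map(x => (x - mean) * (x - mean)).sum / xs.size)
  xs.map(x => (x - mean) / stdev)            // (x − μ) / σ
}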

Slide 25

Slide 25 text

val numCols = data.take(1)(0).length
val n = data.count
val sums = data.reduce((a, b) =>
  a.zip(b).map(t => t._1 + t._2))
// Square each element first, then sum. (A fold whose combiner squares its
// second argument would wrongly re-square partial sums across partitions.)
val sumSquares = data.map(_.map(d => d * d)).reduce((a, b) =>
  a.zip(b).map(t => t._1 + t._2))
val stdevs = sumSquares.zip(sums).map {
  case (sumSq, sum) => sqrt(n * sumSq - sum * sum) / n
}
val means = sums.map(_ / n)

def normalize(f: Array[Double]) =
  (f, means, stdevs).zipped.map((value, mean, stdev) =>
    if (stdev <= 0) (value - mean) else (value - mean) / stdev)

val normalizedData = data.map(normalize(_)).cache()

val kScores = (50 to 120 by 10).par.map(k =>
  (k, clusteringScore(normalizedData, k)))

Slide 26

Slide 26 text

(50,  0.008184436460307516)
(60,  0.005003794119180148)
(70,  0.0036252446694127255)
(80,  0.003448993315406253)
(90,  0.0028508261816040984)
(100, 0.0024371619202127343)
(110, 0.002273862516438719)
(120, 0.0022075535103855447)

Slide 27

Slide 27 text


Slide 28

Slide 28 text

Clustering, Take #3: Categoricals

Slide 29

Slide 29 text

…,tcp,…   →  …,1,0,0,…
…,udp,…   →  …,0,1,0,…
…,icmp,…  →  …,0,0,1,…

Slide 30

Slide 30 text

val protocols = rawData.map(
  _.split(",")(1)).distinct.collect.zipWithIndex.toMap
...

val dataAndLabel = rawData.map { line =>
  val buffer = line.split(",").toBuffer
  val protocol = buffer.remove(1)
  val vector = buffer.map(_.toDouble)

  // One-hot encode: a 1.0 in the slot for this record's protocol
  val newProtocolFeatures = new Array[Double](protocols.size)
  newProtocolFeatures(protocols(protocol)) = 1.0
  ...
  vector.insertAll(1, newProtocolFeatures)
  ...
  (vector.toArray, label)
}

Slide 31

Slide 31 text

(50,  0.09807063330707691)
(60,  0.07344136010921463)
(70,  0.05098421746285664)
(80,  0.04059365147197857)
(90,  0.03647143491690264)
(100, 0.02384443440377552)
(110, 0.016909326439972006)
(120, 0.01610738339266529)
(130, 0.014301399891441647)
(140, 0.008563067306283041)

Slide 32

Slide 32 text


Slide 33

Slide 33 text

Clustering, Take #4: Labels, Entropy

Slide 34

Slide 34 text

0,tcp,http,SF,215,45076,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,1,0.00,0.00,0.00,0.00,1.00,0.00,0.00,0,0,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,normal.

(Callout: the final field, "normal.", is the Label)

Slide 35

Slide 35 text

Using Labels with Entropy

•  Measures mixed-ness: −Σ p(x) log p(x)
•  Bad clusters have very mixed labels
•  Function of a cluster's label frequencies, p(x)
•  Good clustering = low entropy

Slide 36

Slide 36 text

def entropy(counts: Iterable[Int]) = {
  val values = counts.filter(_ > 0)
  val sum: Double = values.sum
  values.map { v =>
    val p = v / sum
    -p * log(p)
  }.sum
}

def clusteringScore(data: RDD[Array[Double]],
                    labels: RDD[String],
                    k: Int) = {
  ...
  val labelsInCluster =
    data.map(model.predict(_)).zip(labels).groupByKey.values
  val labelCounts = labelsInCluster.map(
    _.groupBy(l => l).map(t => t._2.length))
  val n = data.count
  // Average entropy over clusters, weighted by cluster size
  labelCounts.map(m => m.sum * entropy(m)).sum / n
}

Slide 37

Slide 37 text

(30,  1.0266922080881913)
(40,  1.0226914826265483)
(50,  1.019971839275925)
(60,  1.0162839563855304)
(70,  1.0108882243857347)
(80,  1.0076114958062241)
(95,  0.4731290640152461)
(100, 0.5756131018520718)
(105, 0.9090079450132587)
(110, 0.8480807836884104)
(120, 0.3923520444828631)

Slide 38

Slide 38 text

72          ipsweep.    1
72           normal.   85
77          ipsweep.    6
77             land.    9
77          neptune. 1597
77           normal. 4775
77        portsweep.    2
77            satan.   20
90  buffer_overflow.    1
90     guess_passwd.   45
90          ipsweep.   36
90          neptune. 4600
90           normal.  598
90        portsweep.   54
90            satan.    6
90      warezclient.    1
93        ftp_write.    3
93       loadmodule.    1
93         multihop.    1
93           normal. 4635
93              phf.    4
93        portsweep.    1
93              spy.    1

Slide 39

Slide 39 text

Detecting an Anomaly

Slide 40

Slide 40 text

val kmeans = new KMeans()
kmeans.setK(95)
kmeans.setRuns(10)
kmeans.setEpsilon(1.0e-6)
val model = kmeans.run(normalizedData)

val distances = normalizedData.map(datum =>
  (distToCentroid(datum, model), datum))

// Take the 100 most distant points; the 100th sets the threshold
val outliers = distances.top(100)(Ordering.by(_._1))
val threshold = outliers.last._1

def anomaly(datum: Array[Double], model: KMeansModel) =
  distToCentroid(normalize(datum), model) > threshold
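A hedged usage sketch, not on the slide: the anomaly function can be mapped back over the parsed data to pull out flagged connections, assuming the dataAndLabel RDD built earlier:

// Flag records whose normalized form lies beyond the distance threshold;
// the label rides along so flagged records can be inspected by hand
val flagged = dataAndLabel.filter {
  case (datum, label) => anomaly(datum, model)
}
flagged.take(10).foreach { case (_, label) => println(label) }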

Slide 41

Slide 41 text

7290,tcp,telnet,SF,844,9016,0,0,5,0,0,1,5,1,0,22,0,0,0,0,0,0,1,1,0.00,0.00,0.00,0.00,1.00,0.00,0.00,131.00,40.00,0.01,0.02,0.01,0.05,0.00,0.00,0.00,0.00

anomaly!

Slide 42

Slide 42 text

From Here to Production?

•  Real data set!
•  Algorithmic
   •  Distance metrics
   •  k-means|| init
•  Algorithms
   •  Uniquely ID data points
•  Real-Time
   •  with Spark Streaming? (sketched below)
•  Continuous Pipeline
•  Visualization
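One way the "real-time" bullet could look, as a minimal sketch only; the socket source, batch interval, and parse helper are all assumptions here, not part of the talk:

import org.apache.spark.streaming.{Seconds, StreamingContext}

// Score each incoming connection record against the trained model
val ssc   = new StreamingContext(sc, Seconds(10))
val lines = ssc.socketTextStream("localhost", 9999)
lines.map(parse)                        // hypothetical String => Array[Double]
     .filter(datum => anomaly(datum, model))
     .print()                           // or write alerts somewhere durable
ssc.start()
ssc.awaitTermination()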

Slide 43

Slide 43 text

sowen@cloudera.com