Slide 1

Slide 1 text

Unified Monitoring with AppDynamics
Dustin Whittle, @AppDynamics

Slide 2

Slide 2 text

No content

Slide 3

Slide 3 text

52% of the firms that were in the Fortune 500 in 2000 are gone today

Slide 4

Slide 4 text

Web application complexity is exploding: Mobile, Big Data, SOA, NoSQL, Cloud, Agile
(Example transactions: Login, Flight Status, Search Flight, Purchase)

Slide 5

Slide 5 text

No content

Slide 6

Slide 6 text

No content

Slide 7

Slide 7 text

Uptime is critical for enterprises and consumers

Slide 8

Slide 8 text

Performance impacts the bottom line

Slide 9

Slide 9 text

How fast is fast enough?
§ Performance is key to a great user experience
- Under 100ms is perceived as reacting instantaneously
- A 100ms to 300ms delay is perceptible
- 1 second is about the limit for the user's flow of thought to stay uninterrupted
- Users expect a site to load in 2 seconds
- After 3 seconds, 40% of users will abandon your site
- 10 seconds is about the limit for keeping the user's attention
§ Modern applications spend more time in the browser than on the server side

Slide 10

Slide 10 text

Who cares about performance?

Slide 11

Slide 11 text

No content

Slide 12

Slide 12 text

No content

Slide 13

Slide 13 text

No content

Slide 14

Slide 14 text

How many enterprise monitoring products would you estimate your IT org owns?

Slide 15

Slide 15 text

The war room response team

Slide 16

Slide 16 text

The problems with monitoring tools
§ Root cause isolation is elusive as monitoring lives in silos
- Infrastructure
  - Hardware + Logs + Network + Storage + Containers + VMs
- Application
  - Load Balancers + Web Servers + App Servers + File Servers
  - Databases + Caches + Queues + Third-party services
- End Users
  - CDN
  - Web + Mobile
§ Metrics lack the context of impact

Slide 17

Slide 17 text

Monitoring lacks the business context

Slide 18

Slide 18 text

The struggle of modern monitoring
§ Organizations focus on availability + raw metrics, not end user experience / impact
§ Complex apps built on microservices in containers living in elastic cloud environments
§ Too many graphs from too many metrics
- Separating the signal from the noise is difficult
- No topology awareness. No transactional visibility. No root cause.
§ Alert storming with too many false alarms
- Alerting is based on static thresholds and lacks intelligent anomaly detection + correlation
- No historical context or relationships between metrics and events
§ No single pane of glass across the performance stakeholders
- Not able to quantify the impact of performance degradation, and not self-service

Slide 19

Slide 19 text

No content

Slide 20

Slide 20 text

Context is king: Unified Monitoring

Slide 21

Slide 21 text

Breaking down the silos

Slide 22

Slide 22 text

No content

Slide 23

Slide 23 text

No content

Slide 24

Slide 24 text

No content

Slide 25

Slide 25 text

Situation-aware data and views: Web Ops, App Owner, Server Admin, DBA, IT Ops

Slide 26

Slide 26 text

No content

Slide 27

Slide 27 text

No content

Slide 28

Slide 28 text

No content

Slide 29

Slide 29 text

No content

Slide 30

Slide 30 text

No content

Slide 31

Slide 31 text

Monitor the end user experience
§ Real User Monitoring vs. Synthetic Monitoring
- Synthetic tests provide 24/7 assurance
- RUM provides insights into actual users
- Mobile device segmentation
- Unexpected behavior/trends
§ Real User Monitoring (a collection sketch follows below)
- Navigation Timing API
- Resource Timing API
- User Timing API
- JavaScript errors
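For illustration, here is a minimal browser-side sketch of collecting the RUM signals listed above with the standard Navigation Timing, Resource Timing, and error APIs. The /rum endpoint and the payload shape are assumptions for the example, not an AppDynamics API.

```typescript
// Minimal RUM collection sketch using standard browser APIs.
// Assumed: a beacon endpoint at "/rum" that accepts a JSON payload.
function collectRumSample(): void {
  const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
  const resources = performance.getEntriesByType("resource") as PerformanceResourceTiming[];

  const sample = {
    // Navigation Timing: how long the page itself took
    ttfb: nav ? nav.responseStart - nav.requestStart : undefined,
    domContentLoaded: nav ? nav.domContentLoadedEventEnd - nav.startTime : undefined,
    pageLoad: nav ? nav.loadEventEnd - nav.startTime : undefined,
    // Resource Timing: the slowest asset or third-party call on the page
    slowestResource: resources
      .map(r => ({ name: r.name, duration: r.duration }))
      .sort((a, b) => b.duration - a.duration)[0],
  };

  // Ship the beacon without blocking the page
  navigator.sendBeacon("/rum", JSON.stringify(sample));
}

// Capture uncaught JavaScript errors alongside the timing data
window.addEventListener("error", e =>
  navigator.sendBeacon("/rum", JSON.stringify({ error: e.message, source: e.filename }))
);

// Wait for the load event so loadEventEnd is populated before sampling
window.addEventListener("load", () => setTimeout(collectRumSample, 0));
```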

Slide 32

Slide 32 text

Metrics + logs help, but intelligence is better

Slide 33

Slide 33 text

No content

Slide 34

Slide 34 text

No content

Slide 35

Slide 35 text

No content

Slide 36

Slide 36 text

Moving from reactive to proactive
§ Resolving before the red = fixing in the yellow
- Automatic runbook automation integrates with your devops stack
§ Intelligent anomaly detection across end-user, application, database, and server metrics
- Automatically calculates dynamic baselines for all of your metrics, which, based on actual usage, define what is "normal" for each metric
- Smart alerting on any deviation from the baselines (a baseline sketch follows below)
§ Understand trends and patterns in failures; automatically learn from the past
- Understand which issues are the most impactful to resolve
- Oftentimes external services with limited visibility are the root cause
- Enforce SLAs
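To make the dynamic-baseline idea concrete, here is an illustrative rolling mean/standard-deviation baseline, not AppDynamics' actual algorithm; the window size, warm-up length, and tolerance are assumptions.

```typescript
// Sketch of dynamic baselining: keep a rolling window per metric, derive a
// "normal" band from its mean and standard deviation, and alert only when a
// new sample deviates from that learned band instead of a static threshold.
class DynamicBaseline {
  private samples: number[] = [];

  constructor(private windowSize = 288, private tolerance = 3) {} // e.g. 24h of 5-minute samples

  // Returns true when the sample falls outside mean +/- tolerance * stddev.
  isAnomalous(sample: number): boolean {
    const anomalous =
      this.samples.length >= 30 && // require a warm-up period before alerting
      Math.abs(sample - this.mean()) > this.tolerance * this.stdDev();
    this.samples.push(sample);
    if (this.samples.length > this.windowSize) this.samples.shift();
    return anomalous;
  }

  private mean(): number {
    return this.samples.reduce((a, b) => a + b, 0) / this.samples.length;
  }

  private stdDev(): number {
    const m = this.mean();
    return Math.sqrt(this.samples.reduce((a, b) => a + (b - m) ** 2, 0) / this.samples.length);
  }
}

// Usage: feed response-time samples; an alert fires only on deviation from the baseline.
const baseline = new DynamicBaseline();
for (const responseTimeMs of [120, 130, 118, 125, 950]) {
  if (baseline.isAnomalous(responseTimeMs)) {
    console.warn(`Anomaly: ${responseTimeMs}ms deviates from the learned baseline`);
  }
}
```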

Slide 37

Slide 37 text

No content

Slide 38

Slide 38 text

Moving from reactive to proactive (continued)
- Automatic discovery of environment and application changes
- New APIs, transactions, services, clouds
§ Leverage analytics to be smarter about using the data you already have
- System logs, metrics from events, and infrastructure stats
- Transactions with request parameters + user state from cookies/sessions
§ Performance monitoring isn't just about the tech
- Visibility into the business impact: alerting when revenue is down

Slide 39

Slide 39 text

AppDynamics leverages and embraces open source

Slide 40

Slide 40 text

Leading companies invest in performance
§ Etsy = Kale = StatsD + Skyline + Oculus (stats collection + anomaly detection/correlation)
§ Netflix = PCP + Vector + Servo + Atlas (dashboards, data collection, root cause analysis)
§ Twitter = Zipkin (distributed tracing)

Slide 41

Slide 41 text

Recommendations
§ Treat performance as a feature
- Create a performance budget with milestones, speed index, and page speed (a budget-gate sketch follows below)
- Capacity plan and load test the server side
- Optimize and performance test the client side
§ Monitor performance in development and production
- Instrument everything
- Measure the difference of every change
- Understand how failures impact performance
§ Make monitoring and testing critical parts of your continuous delivery process
§ Connect the exec/dev/ops performance perspectives to align on business impact
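As a minimal sketch of the performance-budget recommendation, the snippet below fails a continuous delivery step when measured metrics exceed the budget. The budget values, the perf-report.json file, and the metric names are hypothetical, not a specific tool's format.

```typescript
// Enforce a performance budget as a CI gate.
// Assumed input: a JSON report produced by your load/performance test.
import { readFileSync } from "fs";

interface Budget { pageLoadMs: number; speedIndex: number; totalBytes: number; }

const budget: Budget = { pageLoadMs: 2000, speedIndex: 3000, totalBytes: 1_500_000 };

// Expected report shape: { "pageLoadMs": 1850, "speedIndex": 2700, "totalBytes": 1200000 }
const measured: Budget = JSON.parse(readFileSync("perf-report.json", "utf8"));

const violations = (Object.keys(budget) as (keyof Budget)[])
  .filter(metric => measured[metric] > budget[metric])
  .map(metric => `${metric}: ${measured[metric]} exceeds budget of ${budget[metric]}`);

if (violations.length > 0) {
  console.error("Performance budget exceeded:\n" + violations.join("\n"));
  process.exit(1); // fail the build so regressions are caught before release
}
console.log("Performance budget OK");
```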

Slide 42

Slide 42 text

Go back and investigate how your company can break down the monitoring silos and be more impactful with application intelligence.

Slide 43

Slide 43 text

Questions?

Slide 44

Slide 44 text

Thank you! Enjoy the rest of Velocity 2015.

Slide 45

Slide 45 text

http://www.appdynamics.com/