re:Invent 2019: What is new in AWS?

Serhat Can
December 16, 2019

In this presentation, we go over the new features and products announced at re:Invent 2019.




  1. What's New in AWS: re:Invent 2019. Halil Bahadir, Manager, Solutions Architect at AWS; Serhat Can, Technical Evangelist at Atlassian. © 2019, Amazon Web Services, Inc. or its affiliates. All rights reserved.
  2. What’s new in Compute

  3. 270+ instances across 42 instance types (chart: growth in EC2 instance types since 2017).
  4. Broadest and deepest platform choice: 270+ instance types for virtually every workload and business need. Categories: general purpose, burstable, compute intensive, memory intensive, storage (high I/O), dense storage, GPU compute, graphics intensive. Capabilities: Elastic Block Store, Elastic Inference. Options: choice of processor (AWS, Intel, AMD), fast processors (up to 4.0 GHz), high memory footprint (up to 12 TiB), instance storage (HDD and NVMe), accelerated computing (GPUs and FPGA), networking (up to 100 Gbps), bare metal, and sizes from nano to 32xlarge.
  5. Broadest choice of processors: AMD (Rome), second-generation Intel® Xeon Scalable processors, and AWS Graviton.
  6. Announcing the AWS Graviton2 processor, enabling the best price/performance for your cloud workloads. Graviton (first generation): the first Arm-based processor available in a major cloud; built on 64-bit Arm Neoverse cores with AWS-designed silicon using 16 nm manufacturing technology; up to 16 vCPUs, 10 Gbps enhanced networking, 3.5 Gbps EBS bandwidth. Graviton2: built on 64-bit Arm Neoverse cores with AWS-designed silicon using 7 nm manufacturing technology; up to 64 vCPUs, 25 Gbps enhanced networking, 18 Gbps EBS bandwidth; 7x the performance, 4x the compute cores, and 5x faster memory than the first generation.
  7. Announcing Graviton2-based instances: M6g (available in preview), with C6g and R6g coming in 2020. Up to 40% better price/performance for general-purpose, compute-intensive, and memory-intensive workloads. M6g is built for general-purpose workloads such as application servers, mid-size data stores, and microservices. C6g is built for compute-intensive applications such as HPC, video encoding, gaming, and simulation workloads. R6g is built for memory-intensive workloads such as open-source databases or in-memory caches. Local NVMe-based SSD storage options will also be available in general-purpose (M6gd), compute-optimized (C6gd), and memory-optimized (R6gd) instances.
  8. Summary • Amazon EC2 M6g, C6g, R6g instances and their

    disk variants powered by AWS Graviton2 processors provide up to 40% improved price/performance over comparable x86-based instances. • These Graviton2-powered instances support a broad spectrum of workloads including application servers, open source databases, in-memory caches, microservices, gaming servers, electronic design automation, high-performance computing, and video encoding. • Most applications built on Linux distributions and open source software can run easily on multiple processor architectures and are well suited for the new instance types. • These instances are supported by several Linux distributions and an extensive ecosystem of Independent Software Vendors (ISVs). • M6g instances are available now in preview.
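Since, as the slide notes, most Linux software runs on multiple processor architectures, a first sanity check when evaluating a Graviton2 migration is whether your build environment targets 64-bit Arm. A minimal illustrative sketch (this helper is not an AWS tool, just a platform check):

```python
import platform
from typing import Optional

# Machine strings that indicate a 64-bit Arm (e.g. Graviton-based) environment.
ARM_MACHINES = {"aarch64", "arm64"}

def is_arm64(machine: Optional[str] = None) -> bool:
    """Return True when the given (or current) machine string is 64-bit Arm."""
    m = machine if machine is not None else platform.machine()
    return m.lower() in ARM_MACHINES
```

Running this on an M6g/C6g/R6g instance would report Arm; on an x86 instance it would not.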
  9. Inference accounts for the majority of machine learning infrastructure costs: training is typically less than 10% of infrastructure cost, while inference is more than 90%.
  10. Optimizing ML performance with a custom chip (illustrative chart: performance per watt for target applications increases from CPU to GPU to custom ASIC).
  11. ML inference deployment options on Amazon EC2. Custom chip: EC2 Inf1 instances, powered by AWS Inferentia, for applications that leverage common ML frameworks; the best price/performance for ML inferencing in the cloud, with up to 40% lower cost per inference and up to 3x higher throughput than G4 instances; available today. GPU based: EC2 G4 instances, based on NVIDIA T4 GPUs, for applications that require access to CUDA, cuDNN, or TensorRT libraries; launched. CPU based: EC2 C5 instances, with Intel Skylake CPUs and support for the AVX-512/VNNI instruction set, for small models and low sensitivity to performance; launched.
  12. EC2 Inf1 instances are built from the ground up by AWS for high performance and low cost, combining AWS Neuron, custom 2nd Gen Intel Xeon Scalable processors, AWS Inferentia, and the AWS Nitro System.
  13. EC2 Inf1 instances are optimized for ML inferencing: object detection, natural language processing, personalization, speech recognition, image processing, and fraud detection.
  14. Announcing EC2 Image Builder

  15. EC2 Image Builder benefits: quickly and easily automate the creation, management, and deployment of up-to-date, compliant "golden" VM images; improve service uptime by testing images before use in production; generate image-building automation through a GUI; reduce the cost of building secure, compliant, up-to-date images.
  16. EC2 Image Builder benefits (continued): build golden VM images for use on AWS and on-premises; enforce policies on VM image usage across AWS accounts; works for both Windows and Linux.
  17. EC2 Image Builder, how it works: start with a source image; customize software and configurations; secure the image with AWS-provided or custom hardening templates; test the image with AWS-provided or custom tests; distribute the "golden" image to selected AWS Regions; repeat when updates are pending. All EC2 Image Builder operations run in your AWS account.
  18. AWS Compute Optimizer recommends optimal instances for EC2 and EC2 Auto Scaling groups from 140+ instance types across the M, C, R, T, and X families. Lower costs and improve workload performance; applies insights from millions of workloads to make recommendations; saves time comparing and selecting optimal resources for your workload.
  20. AWS Outposts: bringing AWS on-premises. The same AWS-designed infrastructure as in AWS data centers (built on the AWS Nitro System); fully managed, monitored, and operated by AWS as if in AWS Regions; a single pane of management in the cloud, with the same APIs and tools as in AWS Regions.
  21. AWS Outposts rack: an industry-standard 42U rack, fully assembled and ready to be rolled into final position; installed by AWS and simply plugged into power and network; a centralized redundant power conversion unit and DC distribution system for higher reliability, energy efficiency, and easier serviceability; redundant active components, including top-of-rack switches and hot spare hosts. Dimensions: 24" wide, 48" deep, 80" tall.
  22. Available in two variants. VMware Cloud on AWS: VMware APIs and services that leverage existing skills, automation, and governance policies; for customers running VMware SDDC on-premises. Native AWS: AWS APIs, services, and features as in the AWS cloud; EC2 and EBS, with support for services including RDS, ECS, EKS, EMR, ALB, and others.
  23. Run AWS services locally. Compute and storage: Amazon EC2 instances and EBS volumes. Networking: Amazon VPC. Database: Amazon Relational Database Service (RDS). Containers: Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Data processing: Amazon EMR.
  24. With the same AWS APIs and tools as in the AWS Region: EC2 Auto Scaling groups, AWS CloudFormation, CloudWatch, CloudTrail, Elastic Beanstalk, Cloud9, and more.
  25. AWS Local Zones: a new type of AWS infrastructure deployment that places compute, storage, database, and other services closer to customers, for demanding applications that require single-digit millisecond latencies. AWS infrastructure at the edge; local compute, storage, database, and other services; connect to services in AWS Regions; deliver new low-latency apps. NEW
  26. AWS Wavelength • Extends AWS infrastructure to 5G networks •

    Run latency-sensitive portions of applications in “Wavelength Zones,” and seamlessly connect to the rest of your applications and the full breadth of services in AWS • Same AWS APIs, tools, and functionality • Global partner network NEW
  27. AWS Wavelength benefits: low latency and high bandwidth; a consistent experience; the same AWS benefits; ubiquity.
  28. Wavelength Zone: the same AWS-designed infrastructure as in AWS data centers, hosted in a site within a CSP partner network; integrated into the CSP 5G network; managed and monitored from an AWS Region.
  29. AWS Nitro Enclaves: create isolated compute environments to further protect and securely process highly sensitive data, such as personally identifiable information (PII) and healthcare, financial, and intellectual-property data, within your Amazon EC2 instances. Nitro Enclaves uses the same Nitro hypervisor technology that provides CPU and memory isolation for EC2 instances. Enclaves are virtual machines attached to EC2 instances, with no persistent storage, no administrator or operator access, and only secure local connectivity to the customer's EC2 instance. NEW
  30. What’s new in Serverless

  31. AWS container services landscape. Management (deployment, scheduling, scaling, and management of containerized applications): Amazon Elastic Container Service, Amazon Elastic Kubernetes Service. Hosting (where the containers run): Amazon EC2, AWS Fargate. Image registry (container image repository): Amazon Elastic Container Registry.
  32. AWS Fargate: managed by AWS, with no EC2 instances to provision, scale, or manage. Elastic: scale up and down seamlessly, and pay only for what you use. Integrated with the AWS ecosystem: VPC networking, Elastic Load Balancing, IAM permissions, CloudWatch, and more. Runs Kubernetes pods or ECS tasks.
  33. EKS on Fargate (serverless Kubernetes). Bring existing pods: you don't need to change your existing pods, and Fargate works with existing workflows and services that run on Kubernetes. Production ready: launch ten or tens of thousands of pods in seconds, and easily run pods across multiple AZs for high availability. Right-sized and integrated: pay only for the resources your pods need, with native AWS integrations for networking and security. Fargate runs tens of millions of containers for AWS customers every week.
  34. Provisioned Concurrency for AWS Lambda keeps functions initialized and hyper-ready to respond in double-digit milliseconds. Ideal for latency-sensitive applications; you fully control when and for how long to enable it; no changes to your code are required; fully serverless. (Learn more in CON213-L, the leadership session on using containers and serverless to accelerate modern application development, Wednesday 9:15am.) Serverless PREVIEW NEW
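Sizing a Provisioned Concurrency value is essentially Little's law: concurrent executions are roughly request rate times average duration. A small illustrative helper for that arithmetic (the 20% headroom factor is an assumption for bursts, not an AWS recommendation):

```python
import math

def provisioned_concurrency(requests_per_second: float,
                            avg_duration_s: float,
                            headroom: float = 1.2) -> int:
    """Estimate the Provisioned Concurrency to configure for a Lambda function.

    Little's law: concurrency = arrival rate x time in system. The headroom
    factor pads the estimate for traffic bursts (1.2 is an arbitrary choice).
    """
    return math.ceil(requests_per_second * avg_duration_s * headroom)

# 100 req/s at 250 ms average duration needs ~25 concurrent executions,
# padded to 30 with 20% headroom.
```

The resulting number is what you would configure on the function's alias or version.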
  35. Amazon RDS Proxy: a fully managed, highly available database proxy. Supports the new scale of serverless application connections; pools and shares database connections; preserves connections during database failovers; manages DB credentials with Secrets Manager and IAM; fully managed, with no provisioning, patching, or management. (Diagram: applications connect through RDS Proxy connection pooling to the RDS database instance.) PREVIEW NEW
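The core idea behind RDS Proxy, multiplexing many short-lived application connections over a small set of long-lived database connections, can be sketched with a toy pool (illustrative only; RDS Proxy itself is fully managed and you never write this code):

```python
from collections import deque

class ConnectionPool:
    """Toy pool showing how a proxy shares few DB connections among many clients."""

    def __init__(self, max_connections: int):
        self.max_connections = max_connections
        self._idle = deque()   # connections ready for reuse
        self._opened = 0       # connections actually opened against the database

    def acquire(self):
        if self._idle:
            return self._idle.popleft()          # reuse an idle connection
        if self._opened < self.max_connections:
            self._opened += 1
            return {"conn_id": self._opened}     # stand-in for a real DB connection
        raise RuntimeError("pool exhausted; client should wait or back off")

    def release(self, conn):
        self._idle.append(conn)                  # return connection for reuse
```

However many clients come and go, the database only ever sees `max_connections` connections, which is what protects it from serverless connection storms.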
  36. Amazon API Gateway HTTP APIs. Save up to 70% compared to REST APIs: HTTP APIs are optimized for building APIs that proxy to AWS Lambda functions or HTTP backends, making them ideal for serverless workloads. Significantly faster: up to 50% latency reduction. HTTP APIs support only API proxy functionality; customers who want proxy functionality plus API management features in a single solution can use REST APIs from Amazon API Gateway. Serverless PREVIEW NEW
  37. Amazon EventBridge schema registry. Why? As customers' applications grow and more teams write custom events, more effort is required to find events and their structure, and to write code that reacts to those events. What? The Amazon EventBridge schema registry stores event structure (schema) in a shared central location and maps those schemas to code for Java, Python, and TypeScript, so it's easy to use events as objects in your code. How? Schemas from your event bus can be added to the registry automatically through the schema discovery feature, and you can connect to and interact with the registry from the AWS console, APIs, or the AWS Toolkits for JetBrains (IntelliJ, PyCharm, WebStorm, Rider) and VS Code. Serverless PREVIEW NEW
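The registry's value is that consumers can check events against a known structure before binding them to code. A minimal stand-in for that idea (the event follows EventBridge's envelope fields, but the schema format here is deliberately simplified, not the registry's actual output, and the event names are invented):

```python
def missing_fields(event: dict, schema: dict) -> list:
    """Return required envelope fields absent from the event (simplified check)."""
    return [f for f in schema.get("required", []) if f not in event]

# Hypothetical discovered schema for an OrderPlaced event's envelope.
ORDER_PLACED = {"required": ["source", "detail-type", "detail"]}

event = {
    "source": "com.example.orders",   # hypothetical producer name
    "detail-type": "OrderPlaced",
    "detail": {"orderId": "123"},
}
```

In practice the registry emits full bindings (classes) per language, so this check happens at compile time rather than by hand.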
  38. Other AWS Lambda announcements. Parallelization Factor for Kinesis and DynamoDB event sources: process one shard of a Kinesis or DynamoDB data stream with more than one Lambda invocation simultaneously. Failure-handling features for Kinesis and DynamoDB event sources: customize responses to data-processing failures and build more resilient stream-processing applications. Destinations for asynchronous invocations: gain visibility into asynchronous invocation results and route them to an AWS service without writing code. Language support for Java 11, Node.js 12, and Python 3.8. SQS FIFO as an event source. Serverless
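Parallelization Factor works because records are routed to concurrent batches by partition key, so one shard fans out to several invocations while records that share a key stay in order. A rough sketch of that routing (the hashing scheme is illustrative, not Lambda's actual one):

```python
import zlib
from collections import defaultdict

def route_records(records, parallelization_factor):
    """Split one shard's records across N concurrent batches, keyed so that
    records sharing a partition key land in the same batch, in arrival order."""
    batches = defaultdict(list)
    for rec in records:
        # Stable hash of the partition key picks a fixed batch (crc32 is
        # deterministic across runs, unlike Python's salted hash()).
        slot = zlib.crc32(rec["partition_key"].encode()) % parallelization_factor
        batches[slot].append(rec)
    return batches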
  39. What’s new in 
 Machine Learning and Artificial Intelligence?

  40. Amazon SageMaker Machine Learning for every developer & data scientist

  41. Amazon SageMaker Build, Train, Deploy Machine Learning Models Quickly at

    Scale Amazon SageMaker Ground Truth Algorithms & Frameworks Notebooks Training & Tuning Deployment & Hosting Reinforcement Learning ML Marketplace Neo
  42. Amazon SageMaker Addressing challenges to machine learning First fully integrated

    development environment (IDE) for machine learning Amazon SageMaker Studio Enhanced notebook experience with quick-start & easy collaboration Amazon SageMaker 
 Notebooks (Preview) Automatic debugging, analysis, and alerting Amazon SageMaker Debugger Experiment management system to organize, track & compare thousands of experiments Amazon SageMaker 
 Experiments Model monitoring to detect deviation in quality & take corrective actions Amazon SageMaker Model Monitor Automatic generation of ML models with 
 full visibility & control Amazon SageMaker Autopilot
  43. Machine learning is iterative, involving dozens of tools and hundreds of iterations. Multiple tools needed for different phases of the ML workflow + lack of an integrated experience + a large number of iterations = cumbersome, lengthy processes and lost productivity.
  44. Introducing Amazon SageMaker Studio The first fully integrated development environment

    (IDE) for machine learning NEW Organize, track, and compare thousands of experiments Easy experiment management Share notebooks without tracking code dependencies Collaboration at scale Get accurate models with full visibility & control without writing code Automatic model generation Automatically debug errors, monitor models, & maintain high quality Higher quality ML models Code, build, train, deploy, & monitor in a unified visual interface Increased productivity
  46. Data science and collaboration need to be easy. Setting up and managing resources + collaboration across multiple data scientists + different resource needs for different data science projects = managing notebooks and collaborating across data scientists is highly complicated.
  47. Introducing Amazon SageMaker Notebooks (available in preview). NEW. Fast-start shareable notebooks: access your notebooks in seconds with your corporate credentials. Fully managed and secure: administrators manage access and permissions. Easy collaboration: share your notebooks as a URL with a single click. Flexibility: dial compute resources up or down (coming soon). No explicit setup: easy access with single sign-on (SSO), and start your notebooks without spinning up compute resources.
  50. Data processing and model evaluation involve a lot of operational overhead. Building and scaling infrastructure for data-processing workloads is complex + using multiple tools or services means learning and implementing new APIs + all steps in the ML workflow need enhanced security, authentication, and compliance = you must build and manage tooling to run large data-processing and model-evaluation workloads.
  51. Introducing Amazon SageMaker Processing: analytics jobs for data processing and model evaluation. NEW. Container support: use SageMaker's built-in containers or bring your own. Custom processing: bring your own script for feature engineering. Fully managed: distributed processing on clusters. Automatic creation and termination: your resources are created, configured, and terminated automatically. Security and compliance: leverage SageMaker's security and compliance features.
  52. Managing trials and experiments is cumbersome. Thousands of experiments + hundreds of parameters per experiment + the need to compare and evaluate them = a very cumbersome, error-prone process.
  53. Introducing Amazon SageMaker Experiments: organize, track, and compare training experiments. NEW. Tracking at scale: track parameters and metrics across experiments and users. Custom organization: organize experiments by teams, goals, and hypotheses. Visualization: easily visualize and compare experiments. Metrics and logging: log custom metrics using the Python SDK and APIs. Fast iteration: quickly go back and forth and maintain high quality.
  55. Debugging and profiling deep learning is painful. Large neural networks with many layers + data capture across many connections + additional tooling for analysis and debugging = a "black box" that is extraordinarily difficult to inspect, debug, and profile.
  56. Introducing Amazon SageMaker Debugger: analysis and debugging, explainability, and alert generation. NEW. Data analysis and debugging: analyze and debug data with no code changes. Relevant data capture: data is captured automatically for analysis. Automatic error detection: errors are detected based on rules. Improved productivity with alerts: take corrective action based on alerts. Visual analysis and debugging: analyze and debug visually from SageMaker Studio.
  58. Deploying a model is not the end: you need to continuously monitor models in production and iterate. Concept drift due to divergence of the data + model performance changing due to unknown factors + continuous monitoring requiring a lot of tooling and expense = model monitoring is cumbersome but critical.
  59. Introducing Amazon SageMaker Model Monitor: continuous monitoring of models in production. NEW. Automatic data collection: data is collected automatically from your endpoints. Continuous monitoring: define a monitoring schedule and detect changes in quality against a pre-defined baseline. CloudWatch integration: automate corrective actions based on Amazon CloudWatch alerts. Visual data analysis: see monitoring results, data statistics, and violation reports in SageMaker Studio. Flexibility with rules: use built-in rules to detect data drift, or write your own rules for custom analysis.
  61. Successful ML requires complex, hard-to-discover combinations of algorithms, data, and parameters. Largely explorative and iterative + requires broad and complete knowledge of the ML domain + lack of visibility = a time-consuming, error-prone process, even for ML experts.
  62. Introducing Amazon SageMaker Autopilot: automatic model creation with full visibility and control. NEW. Quick to start: provide your data in tabular form and specify the target prediction. Automatic model creation: get ML models with feature engineering and automatic model tuning done for you. Visibility and control: get notebooks for your models, with source code. Recommendations and optimization: get a leaderboard and continue to improve your model.
  64. Amazon SageMaker Addressing challenges to machine learning First fully integrated

    development environment (IDE) for machine learning Amazon SageMaker Studio Enhanced notebook experience with quick-start & easy collaboration Amazon SageMaker 
 Notebooks (Preview) Automatic debugging, analysis, and alerting Amazon SageMaker Debugger Experiment management system to organize, track & compare thousands of experiments Amazon SageMaker 
 Experiments Model monitoring to detect deviation in quality & take corrective actions Amazon SageMaker Model Monitor Automatic generation of ML models with 
 full visibility & control Amazon SageMaker Autopilot
  65. Build, train, and deploy machine learning models quickly at scale

    Amazon SageMaker Studio IDE Amazon SageMaker Ground Truth Algorithms and Frameworks SageMaker Notebooks SageMaker Experiments Training and Tuning Deployment and Hosting Reinforcement Learning ML Marketplace SageMaker Debugger SageMaker Autopilot SageMaker Model Monitor NEW! NEW! NEW! NEW! NEW! NEW! Neo
  66. Using Kubernetes for ML is hard to manage and scale. Building and managing services for ML within a Kubernetes cluster + making disparate open-source libraries and frameworks work together in a secure and scalable way + the time and expertise required from infrastructure, data science, and development teams = the need for an easier way to use Kubernetes for ML.
  67. Introducing Amazon SageMaker Operators for Kubernetes: Kubernetes customers can now train, tune, and deploy models in Amazon SageMaker. NEW. Train, tune, and deploy models in SageMaker; orchestrate ML workloads from your Kubernetes environments; create pipelines and workflows in Kubernetes; fully managed infrastructure in SageMaker.
  68. Deploying models at scale is hard to manage and not cost-effective. A large number of per-user or similar models + different access patterns across models (some highly accessed, others infrequent) + the need to keep all models in production, serving inferences at low latency = high deployment costs and challenges in managing scale.
  69. Introducing Amazon SageMaker Multi-Model Endpoints: deploy and manage thousands of models. Easy to deploy and manage: store trained models in Amazon S3 and serve them all from a single endpoint. Deploy multiple models on an endpoint and invoke the target model per request, concurrently invoking multiple models on the same endpoint. Automatic memory handling: memory is managed based on traffic. Significant cost savings: improved endpoint and instance utilization.
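Conceptually, a multi-model endpoint keeps a bounded set of hot models in instance memory and lazily loads the rest from S3 on first invocation, evicting the least recently used. A toy sketch of that behavior (this is an illustration of the idea, not SageMaker's implementation):

```python
from collections import OrderedDict

class ModelCache:
    """Toy LRU cache showing how one endpoint can serve many models."""

    def __init__(self, capacity, loader):
        self.capacity = capacity
        self.loader = loader            # stand-in for "fetch model artifacts from S3"
        self._cache = OrderedDict()     # model name -> loaded model (callable)

    def invoke(self, model_name, payload):
        if model_name not in self._cache:
            if len(self._cache) >= self.capacity:
                self._cache.popitem(last=False)       # evict least recently used
            self._cache[model_name] = self.loader(model_name)
        self._cache.move_to_end(model_name)           # mark as most recently used
        return self._cache[model_name](payload)
```

Hot models answer from memory; cold models pay a one-time load cost, which is why this pattern suits large fleets of infrequently accessed per-user models.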
  70. Training ML models can get expensive. Training can last anywhere from a few minutes to weeks + you want to use EC2 Spot Instances, but they can be interrupted + model training must be unaffected by interruptions = you need to build complex tooling to use Spot Instances for training ML models.
  71. Introducing Amazon SageMaker Managed Spot Training: save up to 90% in training costs compared to Amazon EC2 On-Demand instances. No more interruptions: Spot capacity is managed, and interruptions are handled automatically. Support for algorithms and frameworks: built-in algorithms and frameworks as well as your own. All SageMaker training capabilities: take advantage of Automatic Model Tuning and Reinforcement Learning. Full visibility: visualize your cost savings for each training job.
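The headline number is simple arithmetic: with a fractional Spot discount d, training costs (1 - d) of the On-Demand price. A quick helper for that comparison (the hourly rate in the example is hypothetical, not a real SageMaker price):

```python
def spot_training_cost(on_demand_hourly: float, hours: float,
                       discount: float = 0.9) -> float:
    """Training cost on Spot capacity given a fractional discount vs. On-Demand.

    The default 0.9 mirrors the slide's "up to 90% savings" claim; actual
    discounts vary by instance type and availability.
    """
    return on_demand_hourly * hours * (1.0 - discount)

# A hypothetical $3.00/hour training instance for 10 hours:
# $30.00 On-Demand vs. $3.00 at a 90% Spot discount.
```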
  72. Introducing Amazon Fraud Detector Identify potentially fraudulent online activities such

    as online payment fraud and the creation of fake accounts PREVIEW NEW Step 1: Upload your historical fraud datasets to Amazon S3
 Step 2: Select from pre-built fraud detection model templates 
 Step 3: The model template uses your historical data as input to build a custom model. The model template inspects and enriches data, performs feature engineering, selects algorithms, trains and tunes your model, and hosts the model 
 Step 4: Create rules to either accept, review, or collect more information based on model predictions 
 Step 5: Call the Amazon Fraud Detector API from your online application to receive real-time fraud predictions and take action based on your configured detection rules.
  73. Introducing Amazon CodeGuru New machine learning service to automate code

    reviews and 
 identify your most expensive line of code PREVIEW NEW Find your most expensive lines of code Trained on decades of knowledge and experience Catch the code issue today – don't wait to get paged
  74. Introducing Amazon Kendra Highly accurate and easy to use enterprise

    search service 
 that’s powered by machine learning PREVIEW NEW
  75. What’s new in Storage & Analytics & Databases

  76. Amazon S3 Access Points Simplify managing data access at scale

    for shared data sets on Amazon S3. With S3 Access Points, you can easily create hundreds of access points per bucket, each with a name and permissions customized for the application. This represents a new way of provisioning access to shared data sets. GA NEW
  77. AWS Access Analyzer for S3—New An S3 capability to generate

    comprehensive findings if your resource policies grant public or cross-account access Continuously identify resources with overly broad permissions across your entire AWS organization Resolve findings by updating policies to protect your resources from unintended access before it occurs, or archive findings for intended access Access Analyzer for S3
  78. Our portfolio: broad and deep, purpose-built for builders.
    Data lakes: S3/Glacier; Glue (ETL and Data Catalog); Lake Formation.
    Data movement: Database Migration Service | Snowball | Snowmobile | Kinesis Data Firehose | Kinesis Data Streams | Managed Streaming for Kafka.
    Analytics: Redshift (data warehousing; NEW: AQUA); EMR (Hadoop + Spark; NEW: EMR on Outposts); Kinesis Data Analytics (real time); Elasticsearch Service (operational analytics; NEW: UltraWarm); Athena (interactive analytics).
    Databases: RDS (MySQL, PostgreSQL, MariaDB, Oracle, SQL Server, RDS on VMware; NEW: RDS Proxy, RDS on Outposts); Aurora (MySQL, PostgreSQL); DynamoDB (key-value, document); ElastiCache (Redis, Memcached); Neptune (graph); Timestream (time series); QLDB (ledger); DocumentDB (document); Managed Apache Cassandra Service (wide column, NEW).
    Business intelligence and machine learning: QuickSight (visualizations); SageMaker (ML); Comprehend (NLP); Transcribe (speech-to-text); Textract (text extraction); Personalize (recommendations); Forecast (forecasts); Translate (translation); CodeGuru (code reviews, NEW); Kendra (enterprise search, NEW).
    Data exchange: AWS Data Exchange (NEW).
    Blockchain: Managed Blockchain; Blockchain Templates.
  79. Data warehousing: Amazon Redshift, the first and most popular cloud data warehouse. Best performance, most scalable: 3x faster with RA3*, 10x faster with AQUA*, and unlimited compute capacity added on demand to meet unlimited concurrent access. Lowest cost: cost-optimize workloads by paying for compute and storage separately; 1/10th the cost of a traditional data warehouse, at $1,000/TB/year; up to 75% less than other cloud data warehouses, with predictable costs. Data lake and AWS integration: analyze exabytes of data across your data warehouse, data lakes, and operational databases, and query data across various analytics services. Most secure and compliant: AWS-grade security (e.g., VPC, encryption with KMS, CloudTrail) and all major certifications, such as SOC, PCI DSS, ISO, FedRAMP, and HIPAA. (*vs. other cloud DWs)
  80. Amazon Redshift RA3 instances (GA, NEW): optimize your data warehouse by paying for compute and storage separately. Delivers 3x the performance of existing cloud data warehouses; DS2 customers can migrate and get 2x the performance and 2x the storage for the same cost; automatically scales your data warehouse storage capacity; supports workloads up to 8 PB (compressed). RA3 compute nodes with SSD cache are billed per node-hour; managed storage is billed per TB-month.
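RA3's pricing split makes the monthly bill a two-term sum: nodes times hourly rate times hours, plus managed storage times the per-TB-month rate. A sketch of that arithmetic (both rates below are hypothetical, not published Redshift prices):

```python
def ra3_monthly_cost(nodes: int, node_hourly: float,
                     storage_tb: float, storage_tb_month: float,
                     hours_per_month: float = 730.0) -> float:
    """Monthly Redshift RA3 cost: compute and managed storage billed separately."""
    compute = nodes * node_hourly * hours_per_month
    storage = storage_tb * storage_tb_month
    return compute + storage

# Hypothetical rates: 2 nodes at $3.26/hr plus 50 TB at $24/TB-month.
```

The practical consequence is that growing data no longer forces you to add compute nodes, and vice versa.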
  81. AQUA (Advanced Query Accelerator), coming in 2020, NEW: Redshift runs 10x faster than any other cloud data warehouse, without increasing cost. AQUA brings compute to the storage layer so data doesn't have to move back and forth; a high-speed cache on top of Amazon S3 scales out to process data in parallel across many nodes; AWS custom-designed analytics processors (custom ASICs and FPGAs on each storage node) accelerate data compression, encryption, and data processing; 100% compatible with the current version of Redshift.
  82. Amazon EMR: easily run Spark, Hadoop, Hive, Presto, HBase, and more big data apps on AWS. Low cost: 50–80% cost reduction with EC2 Spot and Reserved Instances, and per-second billing for flexibility. Use S3 storage: process data in S3 securely and with high performance using the EMRFS connector. Latest versions: updated with the latest open-source frameworks within 30 days. Easy and fully managed: no cluster setup, node provisioning, or cluster tuning.
  83. Performance improvements in Spark for Amazon EMR, NEW: a performance-optimized runtime for Apache Spark, 2.6x faster at 1/10th the cost. 100% compliant with Apache Spark APIs. Best performance: 2.6x faster than Spark on EMR without the runtime, and 1.6x faster than third-party managed Spark (with their runtime). Lowest price: 1/10th the cost of third-party managed Spark (with their runtime). (*Based on TPC-DS 3 TB benchmarking on 6-node c4.8xlarge clusters with EMR 5.28 and Spark 2.4; total runtime over 104 queries, lower is better.)
  84. Amazon Athena Federated Query, PREVIEW NEW: run SQL queries on data spanning multiple data stores, such as Redshift (data warehousing), ElastiCache (Redis), Aurora (MySQL, PostgreSQL), DynamoDB (key-value, document), DocumentDB (document), on-premises SQL databases, and S3/Glacier. Run connectors in AWS Lambda, with no servers to manage; run SQL queries on relational, non-relational, object, or custom data sources, in the cloud or on-premises; open-source connectors for common data sources; build connectors to custom data sources.
  85. UltraWarm for Amazon Elasticsearch Service, PREVIEW NEW: a new warm storage tier for the Elasticsearch service, backed by Amazon S3. Seamlessly extends the Elasticsearch service; 90% lower cost; scale up to 3 PB per domain; analyze years of operational data. (Diagram: Kibana dashboards query, through an Application Load Balancer, a domain of data nodes, active and backup master nodes, and UltraWarm nodes backed by S3.)
  86. Data exchange: AWS Data Exchange, GA NEW. Easily find and subscribe to third-party data in the cloud. Efficiently access third-party data: simplified access, with no need to receive physical media, manage FTP credentials, or integrate with different APIs, and minimized legal reviews and negotiations. Quickly find diverse data in one place: more than 1,000 data products from more than 80 data providers, including Dow Jones, Change Healthcare, Foursquare, Dun & Bradstreet, Thomson Reuters, Pitney Bowes, LexisNexis, and Deloitte. Easily analyze data: download or copy data to S3; combine, analyze, and model it with existing data; analyze it with EMR, Redshift, Athena, and AWS Glue.
  87. Amazon QuickSight 
 First cloud-native serverless BI with pay-per-session pricing

    & ML insights for everyone Elastic Scaling Auto-scale 10 to 10K+ users in minutes Pay-as-you-go Serverless Create dashboards in minutes Deploy globally without provisioning a single server Native AWS Secure, Private access to AWS data Integrated S3 data lake permissions through AWS IAM API Support Programmatically onboard users and manage content Easily embed in your apps NEW
  88. Machine learning in Amazon QuickSight

 • Anomaly detection: discover unexpected trends and outliers across millions of business metrics
 • Forecasting: machine-learning forecasting with point-and-click simplicity
 • ML predictions: visualize and build predictive dashboards with SageMaker models
 • Auto-narratives: summarize your business metrics in plain language NEW
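QuickSight's anomaly detection is built on the Random Cut Forest algorithm; as a stand-in illustration of what "flagging outliers against a business metric" means, here is a simple z-score check on a toy order series (not QuickSight's actual method, and the data is made up).

```python
# Flag points whose z-score exceeds a threshold -- a deliberately
# simple stand-in for QuickSight's Random Cut Forest detector.
def flag_anomalies(values, threshold=2.5):
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = var ** 0.5 or 1.0  # avoid division by zero on flat series
    return [i for i, v in enumerate(values)
            if abs(v - mean) / std > threshold]

daily_orders = [100, 98, 103, 101, 99, 102, 400, 97]
print(flag_anomalies(daily_orders))  # the 400-order spike at index 6
```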
  89. ML predictions in Amazon QuickSight (preview)

    AWS/on-premises data sources: Excel, CSV, MySQL, PostgreSQL, MariaDB, Presto, Spark, SQL Server, Amazon Redshift, RDS, S3, Athena, Aurora, EMR, Snowflake, Teradata, Salesforce, Square, Adobe Analytics, Jira, ServiceNow, Twitter, GitHub
 1. Connect to any data: data lakes, SQL engines, 3rd-party applications, and on-premises databases
 2. Select an ML model: create models with Amazon SageMaker Autopilot, use existing custom models, or use packaged models from AWS Marketplace
 3. Visualize and share: analyze results, create visualizations, build dashboards and email reports, and share them with business stakeholders NEW
  90. Easily embed analytics in your own tools

    Powered by QuickSight APIs and flexible customization. Entirely serverless.
 • Deploy and manage dashboards and data via APIs
 • Match your application UI with QuickSight themes
 • Embed dashboards in apps without servers: fast, consistent performance; pay-per-session; automatic scaling to tens of thousands of users; no server management; no scripting NEW
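Embedding works by fetching a short-lived dashboard URL through the QuickSight API. A minimal sketch with boto3's `get_dashboard_embed_url` call, where the account and dashboard IDs are placeholders and the call itself is commented out so the sketch runs without AWS credentials:

```python
# Sketch: parameters for QuickSight's GetDashboardEmbedUrl API.
# Account ID and dashboard ID below are made-up placeholders.
def build_embed_request(account_id, dashboard_id):
    return {
        "AwsAccountId": account_id,
        "DashboardId": dashboard_id,
        "IdentityType": "IAM",           # authenticate as the caller's IAM identity
        "SessionLifetimeInMinutes": 60,  # how long the embed session stays valid
    }

params = build_embed_request("123456789012", "sales-dashboard")
# import boto3
# qs = boto3.client("quicksight")
# url = qs.get_dashboard_embed_url(**params)["EmbedUrl"]
```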
  91. Amazon Managed (Apache) Cassandra Service

    A scalable, highly available, managed Cassandra-compatible database service.
 • No servers to manage: no need to provision, configure, and operate large Cassandra clusters or add and remove nodes manually
 • Single-digit-millisecond performance at scale: virtually unlimited throughput and storage; tables scale up and down automatically based on application traffic
 • Apache Cassandra-compatible: use the same application code, licensed drivers, and tools built on Cassandra
 • Simple migration: move Cassandra databases running on premises or on EC2 to the Managed Cassandra Service PREVIEW NEW
  92. ML in Amazon Aurora, Athena, and QuickSight

    Bringing machine learning to databases, analytics, and BI.
 • Incorporate ML into databases, analytics, and BI
 • Integrated with Amazon SageMaker and Amazon Comprehend
 • Make ML predictions using standard SQL statements; no ML expertise required
 • Reduces the time it takes to get predictions out of models
    [Diagram: standard SQL (SELECT/FROM/WHERE) from Aurora (database), Athena (interactive analytics), and QuickSight (BI) invokes Amazon Comprehend (natural language processing) and Amazon SageMaker models trained on S3 data to return predictions.] NEW
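"ML predictions using standard SQL" means a query can call an ML function inline. A hedged illustration built as a Python string: the function name follows AWS's documented Aurora MySQL integration with Amazon Comprehend, but the table and column names here are invented.

```python
# Illustrative SQL for Aurora's Comprehend integration, held in a
# Python string. "product_reviews" / "review_text" are made-up names;
# aws_comprehend_detect_sentiment is the documented Aurora MySQL
# function for inline sentiment scoring.
sentiment_sql = (
    "SELECT review_text, "
    "aws_comprehend_detect_sentiment(review_text, 'en') AS sentiment "
    "FROM product_reviews"
)
print(sentiment_sql)
```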
  93. Amazon Kinesis Video Streams WebRTC (Analytics and Media Services)

    Stream live media with ultra-low latency and enable two-way interactivity for millions of camera devices.
 • Standards-compliant: compatible with web and mobile platforms for easy plug-in-free playback
 • Fully managed: fully managed WebRTC signaling, TURN, and STUN services with easy-to-use SDKs
 • Real-time two-way interactivity: exchange audio, video, and data between devices, mobile, and web apps
 • Low-latency live media streaming: peer-to-peer audio and video live streaming with sub-one-second playback latency NEW
  94. When to use which service

    Existing application:
 • MySQL → Amazon Aurora, RDS for MySQL
 • PostgreSQL → Amazon Aurora, RDS for PostgreSQL
 • MariaDB → Amazon Aurora, RDS for MariaDB
 • Oracle → Amazon Aurora, RDS for Oracle (use SCT to determine migration complexity)
 • SQL Server → Amazon Aurora, RDS for SQL Server (use SCT to determine migration complexity)
 • MongoDB → Amazon DocumentDB
 • Cassandra → Amazon Managed Apache Cassandra Service
    New application:
 • If you can avoid relational features → Amazon DynamoDB
 • If you need relational features → Amazon Aurora
    Other situations:
 • In-memory store/cache → Amazon ElastiCache
 • Time-series data → Amazon Timestream
 • Track every application change, cryptographically verifiable, with a central trust authority → Amazon Quantum Ledger Database (QLDB)
 • No trusted central authority → Amazon Managed Blockchain
 • Data warehouse & BI → Amazon Redshift, Amazon Redshift Spectrum, and Amazon QuickSight
 • Ad hoc analysis of data in AWS or on premises → Amazon Athena and Amazon QuickSight
 • Apache Spark, Hadoop, HBase (needle-in-a-haystack queries) → Amazon EMR
 • Log analytics, operational monitoring & search → Amazon Elasticsearch Service
 • Real-time analytics → Amazon Kinesis and Amazon Managed Streaming for Apache Kafka
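A decision table like the one above can be encoded as a simple lookup, e.g. for a migration-planning script. This is a sketch covering only a few rows; real service choices depend on workload details the table necessarily glosses over.

```python
# Tiny lookup mirroring part of the slide's decision table.
SERVICE_GUIDE = {
    "mysql": "Amazon Aurora or RDS for MySQL",
    "mongodb": "Amazon DocumentDB",
    "cassandra": "Amazon Managed Apache Cassandra Service",
    "key-value": "Amazon DynamoDB",
    "in-memory cache": "Amazon ElastiCache",
    "time series": "Amazon Timestream",
    "log analytics": "Amazon Elasticsearch Service",
}

def suggest(workload):
    # Fall back to the full table for anything not encoded here.
    return SERVICE_GUIDE.get(workload.lower(), "see the full table")

print(suggest("Cassandra"))  # Amazon Managed Apache Cassandra Service
```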
  95. What else?

  96. NEW

  97. AWS IAM Access Analyzer

    An IAM capability that generates comprehensive findings when your resource policies grant public or cross-account access.
 • Continuously identify resources with overly broad permissions
 • Resolve findings by updating policies to protect your resources from unintended access before it occurs, or archive findings for intended access NEW
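Findings can be triaged programmatically. A sketch of client-side filtering for active, public findings: the field names (`status`, `isPublic`, `resource`) match the finding shape returned by the Access Analyzer API, but the sample findings below are fabricated.

```python
# Keep only findings that are still ACTIVE and expose a resource
# publicly -- the highest-priority items to resolve or archive.
def public_active_findings(findings):
    return [f["resource"] for f in findings
            if f.get("status") == "ACTIVE" and f.get("isPublic")]

# Fabricated sample findings in the API's shape; in practice these
# would come from boto3's accessanalyzer list_findings call.
sample = [
    {"resource": "arn:aws:s3:::my-bucket",
     "status": "ACTIVE", "isPublic": True},
    {"resource": "arn:aws:kms:us-east-1:111122223333:key/abc",
     "status": "ARCHIVED", "isPublic": False},
]
print(public_active_findings(sample))  # ['arn:aws:s3:::my-bucket']
```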
  98. Simplified Windows and SQL Server BYOL

    AWS License Manager now adds host-management capabilities to simplify the bring-your-own-license (BYOL) experience for software licenses, such as Windows and SQL Server, that require a dedicated physical server. NEW
  99. Amazon Braket

    A fully managed service that makes it easy for scientists and developers to explore and experiment with quantum computing.
 • A single environment to design, test, and run quantum algorithms
 • Experiment with a variety of quantum hardware technologies
 • Run hybrid quantum and classical algorithms
 • Get expert help
  100. The Amazon Builders’ Library

  101. Go:

  102. Thank you! © 2019, Amazon Web Services, Inc. or its affiliates. All rights reserved.

    Halil BAHADIR - Manager, Solutions Architect at AWS
    Serhat CAN - Technical Evangelist at Atlassian