
CloudMaster: Accelerated learning for aspiring Cloud Architects- MIT SOC Faculty Development Program

Event Date: October 01, 2024

Sankalp Sandeep Paranjpe

January 12, 2025

Transcript

  1. WHOAMI: Sankalp Sandeep Paranjpe. DevSecOps Engineer @Intangles Lab; MIT ADTU alumnus (Class of 2024); ex-AWS Cloud Captain; Cloud Security and Application Security enthusiast; 2x AWS Certified; EC-Council CEH Certified.
  2. AGENDA: Introduction; Principles for solutions architecture design; Microservices architectures on AWS; Event-Driven Architectures; Scaling Strategies; Load Balancing and CDN with AWS CloudFront.
  3. Principles of Solution Architecture Design: Building scalable architecture design; Building a highly available and resilient architecture; Design for performance; Creating immutable architecture; Think loose coupling; Think service, not server; Think data-driven design; Add security everywhere; Building future-proof, extendable architecture. (Chapter 2, Solutions Architect's Handbook by Saurabh Shrivastava and Neelanjali Srivastava)
  4. Microservices Microservices are an architectural and organizational approach to software

    development where software is composed of small independent services that communicate over well-defined APIs. These services are owned by small, self-contained teams.
  5. Advantages of Microservices: Independent Development; Smaller Codebases; Team Autonomy; Technology Agnosticism; Faster Feedback Loops; Continuous Deployment; Rollback Capabilities; Scalability; Service Discovery and Load Balancing.
  6. 1. Single Responsibility Principle (SRP): Definition: Each microservice should focus

    on a single responsibility or business capability. Example: A "Payment Service" handles all payment-related functions but doesn’t manage user data. Why it Matters: Simplifies service logic, improves maintainability, and reduces dependencies. AWS Service Alignment: AWS Lambda functions can be designed around SRP, keeping each function small and focused.
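The SRP idea above can be sketched as a small, payment-only handler written in the style of an AWS Lambda function. The event shape (`order_id`, `amount`) is a hypothetical example, not an AWS-defined schema; the point is that nothing user-related leaks into this service.

```python
# Sketch of a single-responsibility "Payment Service" handler in the style
# of an AWS Lambda function. Event fields are illustrative placeholders.

def payment_handler(event, context=None):
    """Handle only payment concerns; user data lives in another service."""
    order_id = event["order_id"]
    amount = event["amount"]
    if amount <= 0:
        return {"statusCode": 400, "body": "invalid amount"}
    # Actual charge logic (a call to a payment provider) would go here.
    return {"statusCode": 200, "body": f"charged {amount} for order {order_id}"}
```

Because the handler does exactly one thing, it stays small enough to deploy, test, and roll back on its own.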
  7. 2. Loose Coupling: Definition: Services should be decoupled, communicating through

    APIs rather than direct connections. Example: A “User Service” communicates with a “Billing Service” through RESTful APIs rather than sharing a database. Why it Matters: Reduces dependencies, allows independent updates and deployments. AWS Service Alignment: API Gateway is often used to create decoupled microservices that communicate over HTTP.
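A minimal sketch of the loose-coupling example above: the "User Service" reaches the "Billing Service" only through an API client, never through a shared database. `billing_client` is a hypothetical callable standing in for an HTTP call (e.g., through API Gateway), which also makes the dependency trivial to swap out in tests.

```python
# The User Service depends on an API contract, not on Billing's database.
def get_user_invoice_total(user_id, billing_client):
    invoices = billing_client(f"/billing/invoices?user={user_id}")
    return sum(inv["amount"] for inv in invoices)

# A fake client makes the coupling explicit and easy to replace in tests.
def fake_billing_client(path):
    return [{"amount": 10.0}, {"amount": 5.5}]
```

Swapping `fake_billing_client` for a real HTTP client changes nothing in the User Service's logic, which is exactly the independence loose coupling buys.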
  8. 3. Decentralized Data Management: Definition: Each service owns its data,

    ensuring that services don't share a single database. Example: The "Order Service" manages order data independently from the "Product Service." Why it Matters: Prevents tight coupling at the database level, allows independent scaling and updates. AWS Service Alignment: Use Amazon DynamoDB or RDS for individual service databases.
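The ownership rule above can be sketched with two services that each hold a private store. Plain dicts stand in for the separate databases (e.g., one DynamoDB table per service); the Order Service reads product data only through the Product Service's API, never its store.

```python
# Sketch: each service owns its data; no shared database.

class ProductService:
    def __init__(self):
        self._db = {}                      # this service's private store

    def add_product(self, pid, name):
        self._db[pid] = {"name": name}

    def get_name(self, pid):
        return self._db[pid]["name"]

class OrderService:
    def __init__(self, product_service):
        self._db = {}                      # separate store, separate schema
        self._products = product_service   # API-level dependency only

    def place_order(self, oid, pid):
        # Look the product up via the other service's API, not its database.
        self._db[oid] = {"product": self._products.get_name(pid)}
        return self._db[oid]
```

Because the stores never touch, either service can change its schema or scale its database without coordinating with the other.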
  9. 4. Stateless Services: Definition: Services should be stateless, meaning no

    session data is stored locally between requests. Example: Each API call to the "Auth Service" is independent, without reliance on previous interactions. Why it Matters: Stateless services scale more easily because instances don’t need to share state. AWS Service Alignment: AWS Lambda functions are inherently stateless and ideal for this architecture.
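Statelessness can be illustrated with an "Auth Service" check where every request carries its own signed token, so any instance can verify it without a shared session store. The HMAC-signed token format here is purely illustrative, not a real AWS scheme.

```python
# Sketch of a stateless auth check: the request itself carries everything
# needed to verify it; no server-side session state is kept between calls.
import hashlib
import hmac

SECRET = b"demo-secret"  # in practice, fetched from a secrets manager

def sign(user_id: str) -> str:
    mac = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{mac}"

def verify(token: str) -> bool:
    user_id, _, mac = token.partition(".")
    expected = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)
```

Since verification needs only the token and the secret, any number of instances can handle requests interchangeably, which is what makes horizontal scaling straightforward.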
  10. 5. Independent Scaling: Definition: Each service should scale independently based
  11. Event Driven architectures in AWS Event-driven architectures (EDA) in AWS

    are a design pattern where actions within a system are triggered by events, rather than by direct service-to-service communication. AWS provides various services that facilitate the implementation of event-driven architectures, helping to decouple systems, increase scalability, and improve flexibility.
  12. Key Concepts of Event-Driven Architecture Events: These are the occurrences

    or changes in state, such as a file being uploaded to an S3 bucket, a user signing up, or a message arriving in a queue. Event Producers: Services or components that generate events, such as Amazon S3, Amazon DynamoDB, or custom applications. Event Consumers: Services that listen for and respond to events. These can be AWS Lambda functions, Amazon SNS, SQS, etc. Event Brokers: These manage the routing and delivery of events from producers to consumers, like Amazon EventBridge or SNS.
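The producer / broker / consumer roles above can be shown with a tiny in-process sketch. In AWS the broker role would be played by EventBridge or SNS; here a plain dict of subscriptions stands in for it, and the event type string mimics an S3 notification.

```python
# Minimal in-process event broker: producers publish typed events,
# consumers subscribe to types, the broker routes between them.

class Broker:
    def __init__(self):
        self.subscribers = {}       # event type -> list of consumer callables

    def subscribe(self, event_type, consumer):
        self.subscribers.setdefault(event_type, []).append(consumer)

    def publish(self, event_type, detail):
        # Deliver the event to every consumer registered for this type.
        return [fn(detail) for fn in self.subscribers.get(event_type, [])]

broker = Broker()
broker.subscribe("s3:ObjectCreated", lambda d: f"thumbnail for {d['key']}")
broker.subscribe("s3:ObjectCreated", lambda d: f"indexed {d['key']}")
results = broker.publish("s3:ObjectCreated", {"key": "photo.jpg"})
```

Note that the producer never names its consumers; adding a third consumer requires no change to the publisher, which is the decoupling EDA is after.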
  13. When to scale? 1) Increase in Traffic/Demand: When your web traffic or user activity increases and your current resources can't handle the load, leading to slow performance or downtime. 2) Cost Optimization: To avoid over-provisioning or underutilization, scaling helps align resources with current demand to optimize costs. 3) Performance Degradation: If you notice longer response times, high CPU or memory usage, or frequent timeouts, it could be time to scale. 4) Event-based Workloads: During scheduled events (e.g., marketing campaigns, product launches) that are expected to drive traffic spikes. 5) Global Expansion: If you're serving users across different regions and need to scale infrastructure to provide low-latency services globally.
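A simple helper matching point 3 above: flag when high CPU or long response times suggest it is time to scale out. The thresholds are example values for illustration, not AWS recommendations; in practice these checks live in CloudWatch alarms rather than application code.

```python
# Illustrative scale-out trigger: either sustained high CPU or degraded
# latency signals that current capacity is no longer enough.

def should_scale_out(cpu_percent, p95_latency_ms,
                     cpu_limit=70.0, latency_limit_ms=500.0):
    return cpu_percent > cpu_limit or p95_latency_ms > latency_limit_ms
```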
  14. 1. Auto Scaling Auto Scaling Groups (ASG): Automatically adjusts the number of EC2 instances based on traffic patterns and predefined conditions. When to use: If your application runs on EC2 and traffic is highly variable. How to set up: 1) Define scaling policies (target tracking, step scaling, or scheduled scaling). 2) Set scaling triggers based on metrics like CPU utilization or memory usage.
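A target-tracking policy from step 1 might look like the sketch below, shaped like the parameters boto3's Auto Scaling `put_scaling_policy` call expects. The group name is a placeholder and the API call is left commented out so the example runs without AWS credentials.

```python
# Target-tracking policy: keep the group's average CPU near 50%.
policy_params = {
    "AutoScalingGroupName": "web-asg",        # placeholder group name
    "PolicyName": "cpu-target-50",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,                   # desired average CPU %
    },
}

# With credentials configured, the policy would be applied with:
# import boto3
# boto3.client("autoscaling").put_scaling_policy(**policy_params)
```

With target tracking, AWS computes the scaling adjustments itself; you declare the desired metric value instead of writing step-by-step threshold rules.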
  15. 2. Elastic Load Balancing (ELB) ELB distributes traffic across multiple

    instances and automatically scales up to handle increases in traffic. When to use: If your application needs to distribute traffic across multiple EC2 instances for high availability. How to set up: Add EC2 instances to a target group, and ELB will automatically distribute incoming traffic.
  16. 3. Amazon Elastic Container Service (ECS) / EKS (Kubernetes) Both

    services scale containerized workloads across clusters of EC2 or Fargate instances. When to use: When running microservices or containerized applications. How to set up: Enable Service Auto Scaling for ECS tasks or Kubernetes Pods. Define scaling policies based on CPU, memory, or custom CloudWatch metrics.
  17. 4. AWS Lambda (Serverless) AWS Lambda automatically scales up by

    increasing the number of function instances in response to events, with no manual intervention. When to use: For stateless, event-driven workloads (e.g., API requests, stream processing). How to set up: Simply deploy the function, and AWS handles scaling based on the number of incoming events.
  18. Best Practices for Scaling Set Proper Metrics and Thresholds: Use

    CloudWatch to monitor key performance metrics (CPU, memory, request rates) and set thresholds for triggering scaling actions. Use Elastic Load Balancing: Combine ELB with Auto Scaling to ensure high availability. Design for Fault Tolerance: Spread your infrastructure across multiple Availability Zones or regions to ensure resilience. Automate as Much as Possible: Rely on AWS services like Auto Scaling and Lambda, which handle scaling automatically, reducing operational overhead.
  19. AWS Credits: AWS Proof of Concept Program; AWS Sponsored Hackathons; AWS Community Programs; AWS Cloud Credits for Research; AWS Credits for Startups; AWS Non-profit Credits Program.