Slide 1

Slide 1 text

Two-Week Faculty Development Program. By: Sankalp Sandeep Paranjpe

Slide 2

Slide 2 text

WHOAMI: Sankalp Sandeep Paranjpe. DevSecOps Engineer @ Intangles Lab. MIT ADTU Alumnus (2024 graduate). Ex-AWS Cloud Captain. Cloud Security and Application Security Enthusiast. 2x AWS Certified. EC-Council CEH Certified.

Slide 3

Slide 3 text

AGENDA: Introduction; Principles of Solution Architecture Design; Microservices Architectures on AWS; Event-Driven Architectures; Scaling Strategies; Load Balancing and CDN with AWS CloudFront

Slide 4

Slide 4 text

Where are we? Introduction

Slide 5

Slide 5 text

Where are we? What does a solution architect do? Introduction

Slide 6

Slide 6 text

EC2 Instances, S3 Buckets, CloudFront, Route 53, Load Balancers, CloudWatch, EventBridge, SNS, SQS. Introduction

Slide 7

Slide 7 text

Principles of Solution Architecture Design: Building scalable architecture design; Building a highly available and resilient architecture; Design for performance; Creating immutable architecture; Think loose coupling; Think service, not server; Think data-driven design; Add security everywhere; Building future-proof, extendable architecture. (Chapter 2, Solutions Architect's Handbook by Saurabh Shrivastava and Neelanjali Srivastava.)

Slide 8

Slide 8 text

Microservices Microservices are an architectural and organizational approach to software development where software is composed of small independent services that communicate over well-defined APIs. These services are owned by small, self-contained teams.

Slide 9

Slide 9 text

No content

Slide 10

Slide 10 text

Advantages of Microservices: Independent Development, Smaller Codebases, Team Autonomy, Technology Agnosticism, Faster Feedback Loops, Continuous Deployment, Rollback Capabilities, Scalability, Service Discovery and Load Balancing

Slide 11

Slide 11 text

No content

Slide 12

Slide 12 text

1. Single Responsibility Principle (SRP): Definition: Each microservice should focus on a single responsibility or business capability. Example: A "Payment Service" handles all payment-related functions but doesn’t manage user data. Why it Matters: Simplifies service logic, improves maintainability, and reduces dependencies. AWS Service Alignment: AWS Lambda functions can be designed around SRP, keeping each function small and focused.
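A minimal sketch of SRP on AWS Lambda, assuming a hypothetical payments table and event shape (both are illustrative, not from the slides): the function only records a payment and does nothing else.

```python
# Hypothetical single-responsibility Lambda: handles payments only.
# The table name, event fields, and status value are illustrative assumptions.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
payments_table = dynamodb.Table("payments")  # assumed table name

def lambda_handler(event, context):
    # Only payment-related work happens here; user or order management
    # would live in their own services/functions.
    payment = {
        "payment_id": event["payment_id"],
        "amount": event["amount"],
        "currency": event.get("currency", "INR"),
        "status": "CAPTURED",
    }
    payments_table.put_item(Item=payment)
    return {"statusCode": 200, "body": json.dumps(payment)}
```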

Slide 13

Slide 13 text

2. Loose Coupling: Definition: Services should be decoupled, communicating through APIs rather than direct connections. Example: A “User Service” communicates with a “Billing Service” through RESTful APIs rather than sharing a database. Why it Matters: Reduces dependencies, allows independent updates and deployments. AWS Service Alignment: API Gateway is often used to create decoupled microservices that communicate over HTTP.
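A small sketch of loose coupling, assuming a hypothetical Billing Service exposed through API Gateway at an illustrative URL: the User Service only knows the HTTP contract, not the Billing Service's database or internals.

```python
# User Service calling the Billing Service over its REST API.
# The endpoint URL and payload fields are illustrative assumptions.
import json
import urllib.request

BILLING_API = "https://api.example.com/billing/invoices"  # assumed API Gateway URL

def create_invoice(user_id: str, amount: float) -> dict:
    payload = json.dumps({"userId": user_id, "amount": amount}).encode("utf-8")
    request = urllib.request.Request(
        BILLING_API,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Only the HTTP contract couples the two services; either side can be
    # redeployed or re-implemented without breaking the other.
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())
```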

Slide 14

Slide 14 text

3. Decentralized Data Management: Definition: Each service owns its data, ensuring that services don't share a single database. Example: The "Order Service" manages order data independently from the "Product Service." Why it Matters: Prevents tight coupling at the database level, allows independent scaling and updates. AWS Service Alignment: Use Amazon DynamoDB or RDS for individual service databases.
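A minimal sketch of decentralized data ownership, assuming a hypothetical orders DynamoDB table owned only by the Order Service; the Product Service would keep its own table rather than sharing this one.

```python
# Order Service writing to its own DynamoDB table.
# Table and attribute names are illustrative assumptions.
import boto3

orders_table = boto3.resource("dynamodb").Table("orders")

def save_order(order_id: str, product_id: str, quantity: int) -> None:
    # Only the Order Service reads/writes this table; other services
    # must go through the Order Service API to access order data.
    orders_table.put_item(
        Item={
            "order_id": order_id,
            "product_id": product_id,
            "quantity": quantity,
            "status": "PLACED",
        }
    )
```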

Slide 15

Slide 15 text

4. Stateless Services: Definition: Services should be stateless, meaning no session data is stored locally between requests. Example: Each API call to the "Auth Service" is independent, without reliance on previous interactions. Why it Matters: Stateless services scale more easily because instances don’t need to share state. AWS Service Alignment: AWS Lambda functions are inherently stateless and ideal for this architecture.
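A short sketch of a stateless Auth Service handler on Lambda: every request carries its own token and nothing is kept between invocations. The token check is a placeholder assumption, not a real verification routine.

```python
# Stateless auth check: each invocation validates the request on its own,
# with no session state stored between calls.
import json

def lambda_handler(event, context):
    token = (event.get("headers") or {}).get("Authorization", "")
    # Placeholder validation; a real service would verify a JWT signature
    # or delegate to a token service such as Amazon Cognito.
    if token.startswith("Bearer "):
        return {"statusCode": 200, "body": json.dumps({"authenticated": True})}
    return {"statusCode": 401, "body": json.dumps({"authenticated": False})}
```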

Slide 16

Slide 16 text

5. Independent Scaling: Definition: Each service should scale independently based on its own resource needs and traffic patterns. Example: The "Search Service" scales up to handle increased search requests without affecting other services. Why it Matters: Optimizes resource usage and reduces costs by scaling only the necessary services. AWS Service Alignment: Amazon ECS, EKS, or Lambda Auto Scaling can scale individual services.
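A minimal sketch of registering just one service for independent scaling with Application Auto Scaling, assuming an illustrative ECS cluster and service name for the "Search Service"; other services would keep their own, separate limits.

```python
# Register only the search service as a scalable target (1 to 10 tasks).
# Cluster and service names are illustrative assumptions.
import boto3

autoscaling = boto3.client("application-autoscaling")

autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/demo-cluster/search-service",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=1,
    MaxCapacity=10,
)
```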

Slide 17

Slide 17 text

Event-Driven Architectures in AWS: Event-driven architectures (EDA) in AWS are a design pattern where actions within a system are triggered by events, rather than by direct service-to-service communication. AWS provides various services that facilitate the implementation of event-driven architectures, helping to decouple systems, increase scalability, and improve flexibility.

Slide 18

Slide 18 text

Key Concepts of Event-Driven Architecture Events: These are the occurrences or changes in state, such as a file being uploaded to an S3 bucket, a user signing up, or a message arriving in a queue. Event Producers: Services or components that generate events, such as Amazon S3, Amazon DynamoDB, or custom applications. Event Consumers: Services that listen for and respond to events. These can be AWS Lambda functions, Amazon SNS, SQS, etc. Event Brokers: These manage the routing and delivery of events from producers to consumers, like Amazon EventBridge or SNS.
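A small sketch of an event producer publishing a custom event to Amazon EventBridge; the source, detail-type, and payload fields are illustrative assumptions, and consumers (Lambda, SQS, SNS) would be attached through EventBridge rules.

```python
# Publish a custom "order placed" event to the default EventBridge bus.
# Source, detail-type, and payload are illustrative assumptions.
import json
import boto3

events = boto3.client("events")

events.put_events(
    Entries=[
        {
            "Source": "demo.orders",
            "DetailType": "OrderPlaced",
            "Detail": json.dumps({"orderId": "1234", "amount": 499.0}),
            "EventBusName": "default",
        }
    ]
)
```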

Slide 19

Slide 19 text

Event-Driven Architectures in AWS

Slide 20

Slide 20 text

No content

Slide 21

Slide 21 text

Use Case: Sending a notification to a user when CPU usage exceeds 80%
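A minimal sketch of this use case with CloudWatch and SNS, assuming an illustrative instance ID and email address: the alarm fires when average CPU utilization stays above 80% and notifies the subscribed user.

```python
# CloudWatch alarm on EC2 CPU > 80%, notifying an SNS email subscriber.
# The instance ID and email address are illustrative assumptions.
import boto3

sns = boto3.client("sns")
cloudwatch = boto3.client("cloudwatch")

topic_arn = sns.create_topic(Name="cpu-alerts")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="user@example.com")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-demo-instance",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[topic_arn],
)
```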

Slide 22

Slide 22 text

Use Case: A GuardDuty finding is generated, and a specific action is taken based on the finding.
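A short sketch of one way to wire this up: an EventBridge rule that matches higher-severity GuardDuty findings and forwards them to a responder function. The Lambda ARN, rule name, and severity cut-off are illustrative assumptions.

```python
# EventBridge rule matching higher-severity GuardDuty findings and routing
# them to a responder Lambda. The target ARN and severity are assumptions.
import json
import boto3

events = boto3.client("events")

rule_name = "guardduty-findings-demo"
events.put_rule(
    Name=rule_name,
    EventPattern=json.dumps(
        {
            "source": ["aws.guardduty"],
            "detail-type": ["GuardDuty Finding"],
            "detail": {"severity": [{"numeric": [">=", 7]}]},
        }
    ),
    State="ENABLED",
)

events.put_targets(
    Rule=rule_name,
    Targets=[
        {
            "Id": "responder-lambda",
            # Assumed ARN; the function also needs a resource policy that
            # allows events.amazonaws.com to invoke it.
            "Arn": "arn:aws:lambda:ap-south-1:111122223333:function:guardduty-responder",
        }
    ],
)
```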

Slide 23

Slide 23 text

Scaling Strategies

Slide 24

Slide 24 text

Scaling Strategies

Slide 25

Slide 25 text

When to scale? 1) Increase in Traffic/Demand: When your web traffic or user activity increases and your current resources can't handle the load, leading to slow performance or downtime. 2) Cost Optimization: To avoid over-provisioning or underutilization, scaling helps align resources with current demand to optimize costs. 3) Performance Degradation: If you notice longer response times, high CPU or memory usage, or frequent timeouts, it could be time to scale. 4) Event-based Workloads: During scheduled events (e.g., marketing campaigns, product launches) that are expected to drive traffic spikes. 5) Global Expansion: If you're serving users across different regions and need to scale infrastructure to provide low-latency services globally.

Slide 26

Slide 26 text

1. Auto Scaling Auto Scaling Groups (ASG): Automatically adjust the number of EC2 instances based on traffic patterns and predefined conditions. When to use: If your application runs on EC2 and traffic is highly variable. How to set up: 1. Define scaling policies (target tracking, step scaling, or scheduled scaling). 2. Set scaling triggers based on metrics like CPU utilization or memory usage.
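A minimal sketch of a target-tracking scaling policy for an existing Auto Scaling group, assuming an illustrative group name: the group adds or removes instances to keep average CPU near 60%.

```python
# Target-tracking policy: keep average CPU of the ASG around 60%.
# The Auto Scaling group name is an illustrative assumption.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```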

Slide 27

Slide 27 text

No content

Slide 28

Slide 28 text

2. Elastic Load Balancing (ELB) ELB distributes traffic across multiple instances and automatically scales up to handle increases in traffic. When to use: If your application needs to distribute traffic across multiple EC2 instances for high availability. How to set up: Add EC2 instances to a target group, and ELB will automatically distribute incoming traffic.
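A short sketch of the setup step described above, assuming an illustrative target group ARN and instance IDs: instances are registered into a target group, and the load balancer then spreads incoming traffic across them.

```python
# Register EC2 instances into an existing target group behind an ALB/NLB.
# The target group ARN and instance IDs are illustrative assumptions.
import boto3

elbv2 = boto3.client("elbv2")

elbv2.register_targets(
    TargetGroupArn=(
        "arn:aws:elasticloadbalancing:ap-south-1:111122223333:"
        "targetgroup/web-tg/0123456789abcdef"
    ),
    Targets=[
        {"Id": "i-0123456789abcdef0"},
        {"Id": "i-0fedcba9876543210"},
    ],
)
```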

Slide 29

Slide 29 text

No content

Slide 30

Slide 30 text

No content

Slide 31

Slide 31 text

3. Amazon Elastic Container Service (ECS) / EKS (Kubernetes) Both services scale containerized workloads across clusters of EC2 or Fargate instances. When to use: When running microservices or containerized applications. How to set up: Enable Service Auto Scaling for ECS tasks or Kubernetes Pods. Define scaling policies based on CPU, memory, or custom CloudWatch metrics.
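A minimal sketch of an ECS Service Auto Scaling policy, reusing the illustrative cluster and service names from the earlier sketch and tracking CPU; a similar effect on EKS is usually achieved with a Horizontal Pod Autoscaler.

```python
# Target-tracking policy for an ECS service: hold average CPU near 70%.
# Cluster and service names are illustrative assumptions; the service must
# already be registered as a scalable target (see the earlier sketch).
import boto3

autoscaling = boto3.client("application-autoscaling")

autoscaling.put_scaling_policy(
    PolicyName="search-service-cpu-70",
    ServiceNamespace="ecs",
    ResourceId="service/demo-cluster/search-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "TargetValue": 70.0,
    },
)
```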

Slide 32

Slide 32 text

4. AWS Lambda (Serverless) AWS Lambda automatically scales up by increasing the number of function instances in response to events, with no manual intervention. When to use: For stateless, event-driven workloads (e.g., API requests, stream processing). How to set up: Simply deploy the function, and AWS handles scaling based on the number of incoming events.
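Lambda scales out on its own, so there is usually nothing to configure; the optional guardrail sketched below caps how far one function can scale by reserving concurrency. The function name and limit are illustrative assumptions.

```python
# Optional guardrail: cap a function's concurrency so one hot workload
# cannot consume the whole account's concurrency pool.
# Function name and limit are illustrative assumptions.
import boto3

lambda_client = boto3.client("lambda")

lambda_client.put_function_concurrency(
    FunctionName="order-events-processor",
    ReservedConcurrentExecutions=100,
)
```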

Slide 33

Slide 33 text

Best Practices for Scaling Set Proper Metrics and Thresholds: Use CloudWatch to monitor key performance metrics (CPU, memory, request rates) and set thresholds for triggering scaling actions. Use Elastic Load Balancing: Combine ELB with Auto Scaling to ensure high availability. Design for Fault Tolerance: Spread your infrastructure across multiple Availability Zones or regions to ensure resilience. Automate as Much as Possible: Rely on AWS services like Auto Scaling and Lambda, which handle scaling automatically, reducing operational overhead.

Slide 34

Slide 34 text

3-Tier Layered Architecture

Slide 35

Slide 35 text

Multi-tenant SaaS-based Architecture

Slide 36

Slide 36 text

RESTful-architecture-based e-commerce website

Slide 37

Slide 37 text

AWS Credits: AWS Proof of Concept Program, AWS Sponsored Hackathons, AWS Community Programs, AWS Cloud Credits for Research, AWS Credits for Startups, AWS Non-profit Credits Program

Slide 38

Slide 38 text

References: AWS Official Documentation, AWS Events, AWS Website, Google

Slide 39

Slide 39 text

Let's Connect on LinkedIn: