
MSA (Microservices Architecture): Harder, Better, Faster, Stronger

LINE DEVDAY 2021

November 11, 2021

Transcript

  1. Agenda
     - MSA on the “SeriesOn” service
     - Ways to quickly build using Armeria
     - Why it is better to use GraphQL
     - How to monitor the service?
     - Summary
  2. MSA on SeriesOn: Motivation
     [Architecture diagram: CLIENTS (APP, WEB, TV, EXTERNAL) → FRONT API SERVICES (APP / WEB / TV / EXTERNAL SERVICE) → AGGREGATION SERVICE → BACKEND MICROSERVICES (META, AUTH, PAYMENT, etc. components)]
  3. MSA on SeriesOn: Motivation
     - Front API services: manage client-specific authentication and API invocation.
     - Aggregation services: check through the front API services what data is required, then aggregate it from each backend; backend services are not called directly by clients.
     - Backend services: server components responsible for multiple pieces of business logic, such as authorization, content playing, payment, movie metadata, etc.
  4. SeriesOn MSA: important points to consider while implementing components
     - Light and fast: gRPC + Protobuf, cache, …
     - API auto-documentation: Swagger, …
     - Asynchronous support: Reactive, Coroutine, …
     - Distributed tracing support: Pinpoint, Zipkin, …
  5. Ways to quickly build using Armeria: apply Armeria with gRPC
     Armeria (https://armeria.dev)
     - Powered by LINE (current version: 1.12.0)
     - Go-to microservice framework
     - Supports gRPC, Thrift, Kotlin, Retrofit, Reactive Streams, Spring Boot and Dropwizard
  6. Ways to quickly build using Armeria: apply Armeria with gRPC
     Armeria (https://armeria.dev)
     - Supports HTTP/2
     - Integration with gRPC and Thrift: gRPC-over-HTTP/1 & 2, Thrift-over-HTTP/1 & 2, gRPC-Web
     - Essential features for building microservices: metrics, circuit breaker, distributed call tracing via Zipkin
     - Completely asynchronous and reactive
     - Even higher performance on Linux
  7. Ways to quickly build using Armeria: Armeria with gRPC strengths
     - Enables automatic documentation: APIs can be called directly from the web without a separate gRPC testing tool (e.g. BloomRPC). This is the most frequently used feature!
     - Unlike the native gRPC server, it supports various mime-types on a single port (a minimal server sketch follows this list):
       - application/grpc+proto
       - application/grpc-web+json
       - application/grpc-web+proto
       - application/grpc-web-text+proto
       - application/json; charset=utf-8; protocol=gRPC
       - application/protobuf; protocol=gRPC
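
     A minimal Kotlin sketch of such a single-port setup, assuming a LibraryServiceImpl that implements the stub generated from the LibraryService protobuf shown on the next slide (service name and port are illustrative, not the actual SeriesOn code):

        import com.linecorp.armeria.server.Server
        import com.linecorp.armeria.server.docs.DocService
        import com.linecorp.armeria.server.grpc.GrpcService

        fun main() {
            // LibraryServiceImpl is a hypothetical implementation of the stub
            // generated from the LibraryService protobuf definition.
            val grpcService = GrpcService.builder()
                .addService(LibraryServiceImpl())
                // Accept unframed requests so the same port also serves plain
                // application/json and application/protobuf bodies.
                .enableUnframedRequests(true)
                .build()

            Server.builder()
                .http(8080)
                .service(grpcService)
                // DocService renders the protobuf-based documentation and test console.
                .serviceUnder("/docs", DocService())
                .build()
                .start()
                .join()
        }
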
  8. Ways to quickly build using Armeria: Armeria with gRPC strengths – auto documentation
     - Example protobuf: e.g. LibraryService
     - Auto documentation: automatic protobuf-based documentation and testing, like Swagger
     - Documentation time is greatly reduced
  9. Ways to quickly build using Armeria: Armeria with gRPC strengths – gRPC API testing
     - Human-readable testing: gRPC can be tested using a JSON body (a client-side sketch follows).
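
     With unframed requests enabled as above, the same gRPC method can be exercised with a plain JSON body. A hedged sketch using Armeria's WebClient; the /example.LibraryService/ListLibraries path and the request field are hypothetical:

        import com.linecorp.armeria.client.WebClient
        import com.linecorp.armeria.common.MediaType

        fun main() {
            val client = WebClient.of("http://127.0.0.1:8080")

            // The path follows the gRPC convention /<package>.<Service>/<Method>;
            // the concrete names here are illustrative only.
            val response = client.prepare()
                .post("/example.LibraryService/ListLibraries")
                // One of the mime-types listed on the earlier slide.
                .content(
                    MediaType.parse("application/json; charset=utf-8; protocol=gRPC"),
                    """{"userId": "u-123"}"""
                )
                .execute()
                .aggregate()
                .join()

            println(response.contentUtf8())
        }
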
  10. Ways to quickly build using Armeria: apply a boilerplate within the team
     A set of common rules within the team (auth-service, payment-service, meta-service)
     - Set of common frameworks and library dependencies
     - Initialize a project using a single-line command
     - Helm charts for Kubernetes
     - Gradle-based project
  11. Ways to quickly build using Armeria: apply a boilerplate within the team
     - Helm chart structure: Helm chart + Ingress
     - Gradle 6 based project structure: app built on Armeria, SpringBoot2, gRPC + Protobuf and Zipkin Brave, with Filebeat/Logstash and Nelo2 for logging (a Gradle sketch follows)
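
     A hedged sketch (Gradle Kotlin DSL) of the kind of shared dependency set such a boilerplate could declare; the versions and the exact artifact list are illustrative, not the team's actual template:

        // build.gradle.kts: illustrative subset of a boilerplate's common dependencies
        plugins {
            kotlin("jvm") version "1.5.31"
            id("org.springframework.boot") version "2.5.6"
            id("com.google.protobuf") version "0.8.17"
        }

        repositories { mavenCentral() }

        dependencies {
            // Armeria with gRPC and Spring Boot 2 integration
            implementation("com.linecorp.armeria:armeria-grpc:1.12.0")
            implementation("com.linecorp.armeria:armeria-spring-boot2-starter:1.12.0")
            // Zipkin Brave for distributed tracing
            implementation("com.linecorp.armeria:armeria-brave:1.12.0")
            // Kotlin coroutine support
            implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core:1.5.2")
        }
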
  12. Ways to quickly build using Armeria: summary
     Applying the boilerplate within the team
     - Templates of component specifications within the team
     - No more confusion over project settings
     - Building up knowledge of the boilerplate can reduce related errors later on
     Applying Armeria with gRPC
     - Most of the MSA concerns are covered, so development is fast
     - Possible to focus on business logic
     - Can develop REST APIs as well as gRPC
     - Plus: the useful DocService function
  13. Why is it better? Backend service aggregation – motivation
     - E.g. Team A (APP): “We need meta information in API A”
     - E.g. Team B (TV): “Please include coupon information in API B”
     - E.g. Team C (Player): “In the C API … .. lol”
     This is always a controversial part of service development. We considered ways to make the API faster and easier to use: REST API + HATEOAS vs. GraphQL. GraphQL is the best fit for the clients (APP, PC, MOBILE, and TV), so we applied GraphQL to implement requirements faster.
  14. Why is it better? Backend service aggregation – generic GraphQL service
     - Strength: each client (APP, WEB, TV) can request exactly the schema it needs with its own GraphQL query (GQL query A / B / C).
     - Weakness: hard to track and debug requests from clients.
  15. Why is it better? Backend service aggregation – apply the front API service layer
     The front API service layer (APP / WEB / TV / EXTERNAL SERVICE) manages the GraphQL queries and supports authentication, and GraphQL is used as a microservices aggregator in front of the backends (META, AUTH, PAYMENT, etc.).
  16. Why is it better? Backend service aggregation – apply the front API service layer
     E.g. API request for the purchased list on TV: the TV client calls /v1/libraries on the TV service, which issues a GQL query that aggregates the Meta (Library) and Auth services (a sketch follows):
     libraryGroups { … rightContext { deviceUseContext { … } }, product { title } }
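
     A hedged graphql-java sketch of how the front API layer could serve that query; the schema slice, the field types and the stubbed data are illustrative, and the real data fetcher would aggregate the Meta and Auth backend responses:

        import graphql.GraphQL
        import graphql.schema.idl.RuntimeWiring.newRuntimeWiring
        import graphql.schema.idl.SchemaGenerator
        import graphql.schema.idl.SchemaParser

        fun main() {
            // Hypothetical slice of the schema behind the query on this slide.
            val sdl = """
                type Query { libraryGroups: [LibraryGroup] }
                type LibraryGroup { rightContext: RightContext, product: Product }
                type RightContext { deviceUseContext: String }
                type Product { title: String }
            """.trimIndent()

            val wiring = newRuntimeWiring()
                .type("Query") { builder ->
                    // In the real service this fetcher would aggregate the Meta and
                    // Auth backend responses; here it returns a stubbed row.
                    builder.dataFetcher("libraryGroups") {
                        listOf(mapOf(
                            "rightContext" to mapOf("deviceUseContext" to "TV"),
                            "product" to mapOf("title" to "Example title")
                        ))
                    }
                }
                .build()

            val schema = SchemaGenerator()
                .makeExecutableSchema(SchemaParser().parse(sdl), wiring)
            val graphQL = GraphQL.newGraphQL(schema).build()

            // The TV front API's /v1/libraries handler runs a pre-managed query like this.
            val result = graphQL.execute(
                "{ libraryGroups { rightContext { deviceUseContext } product { title } } }"
            )
            println(result.getData<Any>())
        }
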
  17. Why is it better? Backend service aggregation – apply the front API service layer
     E.g. API request for the purchased list on TV
  18. Why is it better? Backend service aggregation – summary
     Apply the front API service layer
     - Possible to version GraphQL queries and manage their history for each client
     - Both the strengths of GraphQL and those of the REST API can be utilized (hybrid)
     Use GraphQL as a microservices aggregator
     - Get closer to the client team (?)
     - Elegantly solve N+1 problems using the GraphQL batch DataLoader (see the sketch after this list)
     - The more DataLoaders there are for the models, the faster development becomes
     - Since GraphQL is flexible, the initial schema needs to be designed well
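
     A hedged sketch of the batch DataLoader idea, using the java-dataloader library (3.x API) that graphql-java integrates with; ProductClient and Product are hypothetical stand-ins for a backend meta client:

        import org.dataloader.BatchLoader
        import org.dataloader.DataLoaderFactory
        import java.util.concurrent.CompletableFuture

        // Hypothetical backend client and model, used only for illustration.
        data class Product(val id: String, val title: String)
        class ProductClient {
            fun getProducts(ids: List<String>): List<Product> =
                ids.map { Product(it, "title-$it") }
        }

        fun main() {
            val productClient = ProductClient()

            // One batched backend call instead of N single-item calls.
            val batchLoader = BatchLoader<String, Product> { ids ->
                CompletableFuture.supplyAsync { productClient.getProducts(ids) }
            }
            val productLoader = DataLoaderFactory.newDataLoader(batchLoader)

            // Field resolvers call load() individually ...
            val a = productLoader.load("p-1")
            val b = productLoader.load("p-2")

            // ... and dispatch() fires a single batched request for both keys.
            productLoader.dispatch()
            println("${a.join().title}, ${b.join().title}")
        }
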
  19. How to monitor? Motivation
     - Component access logs: ELK (Elasticsearch, Logstash, Kibana) based logging on Kubernetes
     - Distributed tracing & monitoring: Zipkin with ELK (Elasticsearch, Logstash, Kibana)
     - Exception tracing & alerting: Nelo2
  20. How to monitor? Microservices monitoring – gRPC access logs
     When pods are deployed on Kubernetes, the app container is bundled with Filebeat and Logstash: Ingress → Armeria APP → Filebeat → Logstash → Elasticsearch.
  21. How to monitor? Microservices monitoring – using Zipkin and Elasticsearch
     Zipkin (https://zipkin.io)
     - Armeria provides Zipkin tracing integration
     - Easily connects to existing legacy Spring Boot components (using Sleuth, Brave)
     - Supported storages: Cassandra, MySQL, Elasticsearch
     - Can be easily set up using zipkin-server, zipkin-ui and zipkin-dependency
     - The combination of Zipkin and Elasticsearch is considered the most flexible (a wiring sketch follows this list)
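
     A hedged sketch of wiring Armeria's Zipkin (Brave) integration so that spans from every request handled by the server are reported to a Zipkin collector; the collector URL, service name, port and the placeholder /healthz endpoint are illustrative:

        import brave.Tracing
        import com.linecorp.armeria.common.HttpResponse
        import com.linecorp.armeria.server.Server
        import com.linecorp.armeria.server.brave.BraveService
        import zipkin2.reporter.brave.AsyncZipkinSpanHandler
        import zipkin2.reporter.urlconnection.URLConnectionSender

        fun main() {
            // Report finished spans to a Zipkin collector (URL is illustrative).
            val sender = URLConnectionSender.create("http://zipkin:9411/api/v2/spans")
            val tracing = Tracing.newBuilder()
                .localServiceName("tv-service")
                .addSpanHandler(AsyncZipkinSpanHandler.create(sender))
                .build()

            Server.builder()
                .http(8080)
                // Placeholder endpoint; in practice this would be the gRPC/GraphQL service.
                .service("/healthz") { _, _ -> HttpResponse.of(200) }
                // Every request handled by this server is recorded as a Zipkin span.
                .decorator(BraveService.newDecorator(tracing))
                .build()
                .start()
                .join()
        }
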
  22. How to monitor? Microservices monitoring – using Zipkin and Elasticsearch
     Filter by requested service, spanName, and tagQuery.
  23. How to monitor? Microservices monitoring – using Zipkin and Elasticsearch
     Trace which GraphQL query is called from which services.
  24. How to monitor? Microservices monitoring – using Zipkin and Elasticsearch
     Both Kibana dashboard setup and alerting are possible based on the data stored in Elasticsearch.
  25. How to monitor? Microservices monitoring – distributed tracing on asynchronous components
     In an asynchronous environment (Reactive, Coroutine):
     - Requests are not guaranteed to stay on the same thread.
     - (Important) Take care not to lose the TraceContext within the same request.
  26. How to monitor? Microservices monitoring – distributed tracing on asynchronous components
     E.g. case in which the TraceContext is delivered normally: /v1/libraries starts spanA on the TV service, and the GraphQL calls to META (Thread-1), AUTH (Thread-2) and CONFIG (Thread-3) each carry parent spanA plus their own span (spanA + spanB, spanA + spanC, spanA + spanD), so the request is traceable end to end.
  27. How to monitor? Microservices monitoring – distributed tracing on asynchronous components
     E.g. case in which the TraceContext is delivered abnormally (losing the TraceContext): /v1/libraries starts spanA on the TV service, but the calls on Thread-1, Thread-2 and Thread-3 emit spanB, spanC and spanD without the parent spanA, so each call is recognized as a separate request and the flow is untraceable.
  28. How to monitor? Microservices monitoring – distributed tracing on asynchronous components
     In Reactive:
     - When converting to Mono or Flux, the TraceContext needs to be passed along if the work runs in a different ThreadPool.
     - Or pre-specify subscribeOn so that everything executes on one thread (a sketch follows).
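
     A hedged Reactor/Brave sketch of both options; fetchMeta, the pool size and the service name are illustrative:

        import brave.Tracing
        import reactor.core.publisher.Mono
        import reactor.core.scheduler.Schedulers
        import java.util.concurrent.Executors

        // Hypothetical downstream call used only for illustration.
        fun fetchMeta(id: String): String = "meta-$id"

        fun main() {
            val tracing = Tracing.newBuilder().localServiceName("tv-service").build()

            // Option 1: hop to a different thread pool, but wrap it with Brave's
            // CurrentTraceContext so the TraceContext travels with each task.
            val tracedPool = tracing.currentTraceContext()
                .executorService(Executors.newFixedThreadPool(4))
            val meta = Mono.fromCallable { fetchMeta("p-1") }
                .subscribeOn(Schedulers.fromExecutorService(tracedPool))
                .block()
            println(meta)

            // Option 2: pre-specify subscribeOn so the whole pipeline stays on a
            // single scheduler and the TraceContext is never lost mid-request.
            val meta2 = Mono.fromCallable { fetchMeta("p-2") }
                .subscribeOn(Schedulers.single())
                .block()
            println(meta2)

            tracedPool.shutdown()
        }
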
  29. How to monitor? Microservices monitoring – distributed tracing on asynchronous components
     In Coroutines:
     - As in Reactive, this can be solved by injecting a CoroutineContext (a sketch follows).
     - Armeria will support coroutine context injection in version 1.12.0.
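
     A hedged coroutine sketch of the same idea, using kotlinx-coroutines with an Armeria context-aware executor; fetchMeta and workerPool are illustrative, and the coroutine-context support mentioned above for Armeria 1.12.0 would make this wiring simpler:

        import com.linecorp.armeria.server.ServiceRequestContext
        import java.util.concurrent.Executors
        import kotlinx.coroutines.asCoroutineDispatcher
        import kotlinx.coroutines.withContext

        // Hypothetical suspending downstream call used only for illustration.
        suspend fun fetchMeta(id: String): String = "meta-$id"

        // Shared worker pool; in a real service this would come from configuration.
        val workerPool = Executors.newFixedThreadPool(4)

        // Called from inside an Armeria request handler: build a dispatcher from a
        // RequestContext-aware executor so every coroutine resumption restores the
        // Armeria context (and, when Brave is configured with Armeria's
        // RequestContextCurrentTraceContext, the TraceContext along with it).
        suspend fun purchasedList(id: String): String {
            val ctx = ServiceRequestContext.current()
            val tracedDispatcher = ctx.makeContextAware(workerPool).asCoroutineDispatcher()
            return withContext(tracedDispatcher) {
                fetchMeta(id)
            }
        }
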
  30. Summary: final microservices architecture
     [Architecture diagram: clients → API GATEWAY → FRONT API LAYER (APP / TV / PC SERVICE, issuing GraphQL queries) → AGG LAYER (GraphQL) → BACKEND LAYER (gRPC/HTTP); gRPC access logs flow through Filebeat and Logstash into Elasticsearch, Zipkin span logs go to the Zipkin API (with Zipkin dependency) and Elasticsearch, error logs go to NELO2, and Kibana provides dashboard visualization.]
  31. Summary: with Armeria, GraphQL, gRPC and Zipkin
     - Faster: project settings & development
     - Better: enhanced gRPC usability
     - More flexible: aggregation
     - Stronger: distributed tracing support
  32. References
     - gRPC weaknesses: https://docs.microsoft.com/en-us/aspnet/core/grpc/comparison?view=aspnetcore-5.0
     - Armeria architecture: https://armeria.dev, https://engineering.linecorp.com/ko/blog/reactive-streams-with-armeria-2/, https://deview.kr/2019/schedule/283