Ryan Anguiano - Dr. Microservices, Or How I Learned to Stop Worrying and Love the API

Assuming that you already know how to build a monolithic app, you must be wondering how you can use all this "microservice" stuff that you keep hearing about. Well, a good word of advice is that you probably don't need it. If designed properly, a monolithic app should be able to scale to fit the needs of most businesses. Either way, you should keep your development as simple as possible until you have proven and solidified your business concepts. But if you do need to grow to Internet scale, then you have a long road ahead of you.

Moving from a monolithic application to microservices is a natural evolution, often born of necessity. There are several competing schools of thought that are still being battle-tested in these early days of microservice architecture. Most of the requirements are broadly agreed upon across these paradigms; what differentiates them is mainly the tools used to fulfill those requirements.

This talk will cover setting up the required infrastructure, and demonstrate how to migrate a sample monolithic Django application into a microservices platform.

The demo application will use the following technologies: Django, Flask, Fabric, Terraform, Ansible, CentOS, Docker, Mesos, Consul, Nginx, Pgbouncer, Kafka

https://us.pycon.org/2017/schedule/presentation/356/

PyCon 2017

May 21, 2017

Transcript

  1. DR. MICROSERVICES
     Or How I Learned to Stop Worrying and Love the API
     PyCon 2017 | Ryan Anguiano
  2. WHERE TO START
     • Don't start a new project with microservices
     • Hard to pivot from a business standpoint
     • Core of project should be rigidly defined
     • Switch to microservices because you need to, not because you want to
  3. THE MONOLITH
     • Very large Django app
     • Old Python
     • Dependency hell (80+ line requirements.txt)
     • Deployment interrupts entire app
  4. BREAKING UP THE MONOLITH
     • Analyze your data flow
     • Divide application into logical services
     • Leave complicated business logic intact
     • Prioritize obvious services (accounts, location, etc.)
  5.-8. [Diagram slides] CORE LOGIC A-D and the supporting services (ACCOUNTS, BILLING, LOCATION, EMAIL, MESSAGING), with the supporting services progressively grouped apart from the core logic and split out of the monolith into their own services.
  9. BUILDING A MIGRATION ROADMAP
     • Evaluate different tools
     • Use solutions that best meet your needs
     • You can keep your existing project running alongside new services
  10. SERVICES AND API DESIGN
     • Standardize two methods of communication:
       - Synchronous: HTTP REST
       - Asynchronous: Kafka messages
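
     As a rough illustration of the two paths, the sketch below pairs a synchronous HTTP REST call with an asynchronous Kafka publish using kafka-python; the service URL, topic name and payload are illustrative assumptions, not code from the talk.

       import json
       import requests
       from kafka import KafkaProducer

       # Synchronous: block on an HTTP REST call to another service
       user = requests.get('http://accounts.service.consul/api/v1/users/42/').json()

       # Asynchronous: publish an event to Kafka and move on; interested
       # services consume it on their own schedule
       producer = KafkaProducer(
           bootstrap_servers='kafka.service.consul:9092',
           value_serializer=lambda v: json.dumps(v).encode('utf-8'),
       )
       producer.send('user-events', {'event': 'user.viewed', 'user_id': user['id']})
       producer.flush()
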
  11. SERVICES AND API DESIGN
     • The Twelve-Factor App (https://12factor.net)
     • Declarative configuration
     • Environment portability
     • Use as a guide, not dogma
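
     A minimal twelve-factor-style configuration sketch, assuming all settings are read from environment variables so the same build runs unchanged in every environment (the variable names are illustrative):

       import os

       # Declarative configuration: the environment, not the code, decides the values
       DATABASE_URL = os.environ['DATABASE_URL']                  # required: fail fast if missing
       KAFKA_BROKERS = os.environ.get('KAFKA_BROKERS', 'localhost:9092')
       DEBUG = os.environ.get('DEBUG', 'false').lower() == 'true'
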
  12. SERVICES AND API DESIGN
     • Do not make breaking changes to API endpoints
     • Increment endpoint version or add a new endpoint
     • Always test every older endpoint version
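
     One way to add a version without breaking existing clients, sketched as a Django 1.x URLconf (the module and view names are hypothetical):

       # urls.py
       from django.conf.urls import url
       from accounts.api import views_v1, views_v2

       urlpatterns = [
           # v1 stays exactly as it was, so existing clients keep working
           url(r'^api/v1/users/$', views_v1.UserList.as_view()),
           # new or changed behaviour goes into v2 instead of altering v1
           url(r'^api/v2/users/$', views_v2.UserList.as_view()),
       ]
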
  13. SEPARATING DATA STORES
     • Every database should only be accessed by a single service
     • If a database needs to be shared, wrap it in a REST endpoint
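
     A minimal sketch of wrapping a database behind a REST endpoint, here as a Flask service (the table, column and service names are assumptions); only this one service ever holds the database credentials:

       from flask import Flask, jsonify
       import psycopg2

       app = Flask(__name__)

       @app.route('/api/v1/users/<int:user_id>/')
       def get_user(user_id):
           # The accounts service is the single owner of the accounts database
           conn = psycopg2.connect(dbname='accounts')
           with conn, conn.cursor() as cur:
               cur.execute('SELECT id, email FROM users WHERE id = %s', (user_id,))
               row = cur.fetchone()
           if row is None:
               return jsonify({'error': 'not found'}), 404
           return jsonify({'id': row[0], 'email': row[1]})
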
  14. MICROSERVICES TOOLSET CREATION
     • On Django, add a backend_api client to request objects:

       from django.http import JsonResponse

       def my_view(request):
           # Call the accounts service through the attached client
           user = request.backend_api.get(('accounts', 'user'))
           return JsonResponse({'user_id': user['id']})
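
     One possible shape for that backend_api client, sketched as an old-style Django middleware; the BackendAPIClient class, the Consul-based service URLs and the versioned paths are assumptions, not the talk's actual implementation:

       import requests

       class BackendAPIClient(object):
           def __init__(self, base_urls):
               # e.g. {'accounts': 'http://accounts.service.consul'}
               self.base_urls = base_urls

           def get(self, path, **params):
               service, resource = path
               url = '%s/api/v1/%s/' % (self.base_urls[service], resource)
               response = requests.get(url, params=params)
               response.raise_for_status()
               return response.json()

       class BackendAPIMiddleware(object):
           """Attach a backend_api client to every incoming request."""
           def process_request(self, request):
               request.backend_api = BackendAPIClient({
                   'accounts': 'http://accounts.service.consul',
               })
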
  15. DEVOPS AND INFRASTRUCTURE DESIGN
     • Use Docker to develop in the same environment as production
     • Use Terraform and Ansible for configuration-based infrastructure
     • Use Continuous Integration to automate deployments
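
     Fabric is part of the demo stack, so a deploy step might look roughly like the Fabric 1.x task below; the hosts, registry and image names are illustrative, and a real pipeline would drive this from CI rather than by hand:

       from fabric.api import env, run, task

       env.hosts = ['app1.example.com', 'app2.example.com']

       @task
       def deploy(tag='latest'):
           """Pull the freshly built image and restart the service container."""
           run('docker pull registry.example.com/accounts:%s' % tag)
           run('docker stop accounts || true')
           run('docker rm accounts || true')
           run('docker run -d --name accounts -p 8000:8000 '
               'registry.example.com/accounts:%s' % tag)
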
  16. CENTRALIZED DATA PIPELINE
     • Apache Kafka™
     • Confluent Open Source Platform
     • All data has a schema
     • Producers, Consumers and Connectors
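
     On the consuming side, a service might read from the pipeline roughly like this kafka-python sketch (broker, topic and group names are assumptions; the Confluent platform would typically manage schemas through its Schema Registry rather than the plain JSON shown here):

       import json
       from kafka import KafkaConsumer

       consumer = KafkaConsumer(
           'user-events',
           bootstrap_servers='kafka.service.consul:9092',
           group_id='email-service',
           value_deserializer=lambda v: json.loads(v.decode('utf-8')),
       )

       for message in consumer:
           print(message.value)  # e.g. {'event': 'user.created', 'user_id': 42}
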
  17. LOGGING AND ANALYTICS
     • Send all logs into the data pipeline
     • Send all Docker logs into Kafka using logspout
     • Send all syslog into Kafka using Kafka Connect
  18. LOGGING AND ANALYTICS
     • Generate Correlation IDs (CIDs) on all initial actions
     • Use CIDs in logs and all communications between services
     • CIDs can be used to trace issues across the entire distributed system
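
     A minimal sketch of how CIDs might be generated and propagated, assuming an X-Correlation-ID header and old-style Django middleware (none of these names come from the talk):

       import logging
       import uuid

       import requests

       logger = logging.getLogger(__name__)

       class CorrelationIDMiddleware(object):
           """Give every incoming request a CID, reusing one passed by the caller."""
           def process_request(self, request):
               request.cid = request.META.get('HTTP_X_CORRELATION_ID') or uuid.uuid4().hex
               logger.info('request started cid=%s', request.cid)

       def call_accounts_service(request, user_id):
           # Propagate the CID so logs from both services can be joined later
           return requests.get(
               'http://accounts.service.consul/api/v1/users/%s/' % user_id,
               headers={'X-Correlation-ID': request.cid},
           ).json()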