
NGINX for API Developers


Built it? Now deliver it! Shared at APIStrat Amsterdam, this presentation describes measures you can take to ensure successful, robust delivery of your APIs.


March 27, 2014



  1. Built it? Now deliver it. What can API developers learn from high-performance web services? Presented by Owen Garrett, Nginx, Inc.
  2. What is NGINX? Web server: serve content from disk. Application gateway: FastCGI, uWSGI, Passenger… Proxy for HTTP traffic: caching, load balancing… Advanced features: application acceleration, SSL and SPDY termination, performance monitoring, high availability, bandwidth management, content-based routing, request manipulation, response rewriting, authentication, video delivery, mail proxy, geolocation.
  3. The easy way to handle HTTP… Client-side: slow networks, multiple connections, HTTP keepalives. Server-side: limited concurrency. Hundreds of concurrent connections require hundreds of heavyweight threads or processes, competing for limited CPU and memory.
  4. NGINX architecture… we do it the hard way. Hundreds of concurrent connections are handled by a small number of multiplexing processes, typically one process per core.
  5. NGINX transforms application performance – from worst-case traffic to best-case. Slow, high-concurrency traffic on the internet side becomes fast, efficient traffic on the local side.
  6. Roadmap for API deployment:
     • Use NGINX to proxy traffic
     • Define your entry points and blacklist the rest
     • Define your access control
     • Centralize SSL
     • Apply rate and concurrency limits
     • Monitor, scale, cache and compress
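The "rate and concurrency limits" step in the roadmap can be sketched with the stock limit_req and limit_conn modules. The zone names, sizes and rates below are illustrative assumptions, not values from the deck:

```nginx
http {
    # One shared-memory zone per limit, keyed on the client address.
    limit_req_zone  $binary_remote_addr zone=api_rps:10m  rate=10r/s;
    limit_conn_zone $binary_remote_addr zone=api_conn:10m;

    server {
        listen 80;

        location /api/v1 {
            limit_req  zone=api_rps burst=20;   # queue short bursts, reject the rest
            limit_conn api_conn 10;             # at most 10 concurrent connections per client
            proxy_pass http://backends;
        }
    }
}
```

By default, requests rejected by either limit receive a 503; keying on $binary_remote_addr (rather than $remote_addr) keeps the per-client state compact.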
  7. 1. Put NGINX in front: it's as simple as:

     server {
         listen 80;

         location / {
             proxy_pass http://backend;
         }
     }

     upstream backend {
         server webserver1:80;
         server webserver2:80;
         server webserver3:80;
         server webserver4:80;
     }
  8. 2. Define your entry points and blacklist the rest:

     server {
         listen 80 default_server;
         return 444;
     }

     server {
         listen 80;
         server_name api.example.com;

         location / {
             return 444;
         }

         location /api/v1 {
             proxy_pass http://backends;
         }
     }
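The roadmap's "Centralize SSL" step pairs naturally with defining entry points: terminate SSL at the proxy so backends speak plain HTTP. A minimal sketch, in which the certificate paths are assumptions:

```nginx
server {
    listen 443 ssl;
    server_name api.example.com;

    ssl_certificate     /etc/nginx/ssl/api.example.com.crt;  # assumed path
    ssl_certificate_key /etc/nginx/ssl/api.example.com.key;  # assumed path

    location /api/v1 {
        proxy_pass http://backends;
    }
}
```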
  9. 3. Define your edge access control:

     server {
         listen 80;
         server_name api.example.com;

         location /api/v1 {
             proxy_pass http://backends;

             allow 192.168.0.0/24;   # example range; the slide elides the address — use your trusted clients
             deny all;

             auth_request /auth;
         }
     }
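The auth_request directive on the slide issues a subrequest to /auth and admits the client request only if that subrequest returns 2xx. A minimal sketch of the subrequest target, assuming a hypothetical auth service reachable as auth_service (it lives in the same server block as the location above):

```nginx
upstream auth_service {
    server 127.0.0.1:9000;   # assumed address of the auth service
}

server {
    listen 80;
    server_name api.example.com;

    location = /auth {
        internal;                            # not reachable from outside
        proxy_pass http://auth_service;
        proxy_pass_request_body off;         # the auth decision needs headers only
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
    }
}
```

Note that auth_request is provided by the optional ngx_http_auth_request_module, so nginx must be built with it enabled.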
  10. 6. Monitor, Scale, Cache and Compress. Use NGINX's core capabilities:
      – Load balancing, persistence, health monitoring
      – Response compression
      – Status monitoring and logging
      – Response caching
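The caching and compression bullets above can be sketched as follows; the cache path, zone name and validity times are illustrative assumptions:

```nginx
http {
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=api_cache:10m max_size=1g;

    server {
        listen 80;

        gzip on;
        gzip_types application/json;    # compress API responses (text/html is compressed by default)

        location /api/v1 {
            proxy_cache       api_cache;
            proxy_cache_valid 200 10s;  # cache successful responses briefly
            proxy_pass        http://backends;
        }
    }
}
```

Even a very short proxy_cache_valid window shields backends from repeated identical requests under load.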
  11. Turn it up to 11… Modules… Partners…
      NGINX F/OSS community: nginx.org
      NGINX Enterprise and Support: nginx.com