Slide 1

Slide 1 text

Built it? Now deliver it. What can API developers learn from high-performance web services? Presented by Owen Garrett, NGINX, Inc.

Slide 2

Slide 2 text

What do we do?

Slide 3

Slide 3 text

Many of the world’s busiest websites are driven by NGINX…

Slide 4

Slide 4 text

What is NGINX?

[Diagram: Internet → NGINX]

Web Server: serve content from disk
Application Gateway: FastCGI, uWSGI, Passenger…
Proxy: caching, load balancing… HTTP traffic

Advanced Features:
•  Application Acceleration
•  SSL and SPDY termination
•  Performance Monitoring
•  High Availability
•  Bandwidth Management
•  Content-based Routing
•  Request Manipulation
•  Response Rewriting
•  Authentication
•  Video Delivery
•  Mail Proxy
•  GeoLocation

Slide 5

Slide 5 text

HTTP IS A SLOW PROTOCOL…

Slide 6

Slide 6 text

The easy way to handle HTTP…

Client-side: slow network; multiple connections; HTTP keepalives.

Server-side: limited concurrency. Hundreds of concurrent connections… require hundreds of heavyweight threads or processes… competing for limited CPU and memory.

Slide 7

Slide 7 text

NGINX architecture… we do it the hard way. Hundreds of concurrent connections… handled by a small number of multiplexing processes… typically one process per core.
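In configuration terms, the "one process per core" model corresponds to NGINX's worker_processes directive; a minimal sketch (the connection count is an illustrative value, not from the slides):

```nginx
# Top of nginx.conf: one worker process per CPU core
# ("auto" is supported in newer NGINX releases; older ones take an explicit count)
worker_processes auto;

events {
    # Each worker multiplexes many connections over a single event loop
    worker_connections 1024;
}
```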

Slide 8

Slide 8 text

NGINX transforms application performance – from worst-case traffic to best-case

[Diagram: slow, high-concurrency internet-side traffic → NGINX → fast, efficient local-side traffic]

Slide 9

Slide 9 text

Roadmap for API deployment
•  Use NGINX to proxy traffic
•  Define your entry points and blacklist the rest
•  Define your access control
•  Centralize SSL
•  Apply rate and concurrency limits
•  Monitor, scale, cache and compress

Slide 10

Slide 10 text

1. Put NGINX in front: it's as simple as:

server {
    listen 80;

    location / {
        proxy_pass http://backend;
    }
}

upstream backend {
    server webserver1:80;
    server webserver2:80;
    server webserver3:80;
    server webserver4:80;
}

[Diagram: Internet → NGINX → backends]

Slide 11

Slide 11 text

2. Define your entry points and blacklist the rest

server {
    listen 80 default_server;
    return 444;
}

server {
    listen 80;
    server_name api.example.com;

    location / {
        return 444;
    }

    location /api/v1 {
        proxy_pass http://backends;
    }
}

Slide 12

Slide 12 text

3. Define your edge access control

server {
    listen 80;
    server_name api.example.com;

    location /api/v1 {
        proxy_pass http://backends;

        allow 192.168.1.0/24;
        deny all;

        auth_request /auth;
    }
}
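The auth_request directive sends a subrequest to a location named /auth, which the slide does not show. A hedged sketch of that location (the authentication service URL is an assumption) might look like:

```nginx
# Internal target for auth_request subrequests; not reachable by clients directly.
# A 2xx response from the auth service allows the request; 401/403 denies it.
location = /auth {
    internal;
    # Hypothetical authentication service endpoint
    proxy_pass http://auth-service/validate;
    # The auth service only needs headers, not the request body
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
}
```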

Slide 13

Slide 13 text

4. Centralize SSL and PKI

[Diagram: HTTPS traffic from the internet, terminated at NGINX]
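The slide shows only a diagram. A minimal SSL-termination sketch (certificate paths and the upstream name are assumptions, not from the slides) could be:

```nginx
server {
    listen 443 ssl;
    server_name api.example.com;

    # Hypothetical certificate and key paths
    ssl_certificate     /etc/nginx/ssl/api.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/api.example.com.key;

    location /api/v1 {
        # With SSL terminated at the edge, traffic to the
        # backends can stay plain HTTP on the trusted local network
        proxy_pass http://backends;
    }
}
```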

Slide 14

Slide 14 text

5. Rate-limit abusive consumers

limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;

location /search/ {
    limit_req zone=one burst=5;
}
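The roadmap also calls for concurrency limits alongside rate limits. A hedged sketch using NGINX's limit_conn directives (the zone name and limit are illustrative assumptions):

```nginx
# In the http context: track simultaneous connections per client IP
limit_conn_zone $binary_remote_addr zone=addr:10m;

server {
    location /api/v1 {
        # Allow at most 10 concurrent connections per client IP
        limit_conn addr 10;
    }
}
```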

Slide 15

Slide 15 text

6. Monitor, Scale, Cache and Compress
•  Use NGINX's core capabilities…
   – Load balancing, persistence, health monitoring
   – Response compression
   – Status monitoring and logging
   – Response caching
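Response caching and compression from the list above can be sketched together; cache path, zone name, sizes and validity are illustrative assumptions:

```nginx
# In the http context: an on-disk cache with a shared-memory key zone
proxy_cache_path /var/cache/nginx keys_zone=apicache:10m max_size=1g;

server {
    # Compress JSON API responses before sending them to clients
    gzip on;
    gzip_types application/json;

    location /api/v1 {
        proxy_pass http://backends;

        # Cache successful responses briefly to absorb request bursts
        proxy_cache apicache;
        proxy_cache_valid 200 10s;
    }
}
```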

Slide 16

Slide 16 text

Turn it up to 11… Modules… Partners…

NGINX F/OSS community: nginx.org
NGINX Enterprise and Support: nginx.com