APIStrat 2016 | On-prem support? That was so 1982 (Charlie Ozinga)

A large population of businesses and major enterprises continues to rely on on-premises services for a variety of reasons, from public-sector compliance to security concerns to plain old archaic systems that cost too much to do anything about. That doesn't mean we have to keep supporting these services the way we did back in the 1980s.

The challenge we will face in 2016: how do you make those on-prem services work in the new and emerging API Economy? We will dig into on-prem connector proxies, what one needs to do to consume an API architecture in an on-prem environment, and how to support an app when things go down.

Transcript

  1. A long time ago... In an IT department far away, a company would own and
     maintain all of its own computer hardware, where it would install all the
     software it used. This was generally considered a terrible idea. Then came
     the Cloud, which promised to save time and effort just by adding
     “as a service” to everything...
  2. Everybody’s doing it ...
     ‘The argument over whether to go cloud or not is over. Aside from a few
     businesses ... most companies now acknowledge that they will eventually be
     moving their applications to the cloud.’
     ‘If I could have simply deployed our software on the public cloud, ... I
     would have asked, “Where do I sign?”’
     ‘[Moving to the cloud] is undoubtedly one of the biggest trends in the IT
     industry right now.’
  3. … or are they?
     ‘Compute- and I/O-intensive big data workloads won't stray to the cloud yet
     as security and existing infrastructure keep analytics in the data center.’
     ‘Moving to the cloud should always be well evaluated and only done if it
     brings value ... not everything needs to be moved from On-Premises to the
     cloud.’
  4. Scorecard: On-Prem
     • Security (transmit, storage)
     • Compliance (HIPAA, PCI, internal)
     • Integration (w/ other software or processes)
     • Inertia (tech debt, things that work)
     • Cost (to scale)
  5. Connecting: On-Prem
     (0) Nothing to connect to / is not reachable (see the tunnel sketch below)
     • Intranet
     • Firewall
     • Desktop
  6. Usability
     • Multiple services on the same on-prem installation.
     • Automatable.
     • Easy to install and run.
     • Have to monitor and detail client connections.
     • Easy to upgrade.
  7. Usability
     • Multiple services on the same on-prem installation.
     • Automatable.
     • Easy to install and run.
     • Have to monitor and detail client connections.
     • Easy to upgrade.
     • Multi-tenant client
     • Client API
       ◦ Manage tenants
       ◦ Stop / Start / Register
     • Multi-tenant (per Org) server
     • Server API
       ◦ Total registered users
       ◦ API success / error count
  8. Client + Server API

     $ curl localhost:8101/counts/requests      ← server API
     {"count":432}

     $ curl localhost:8100/tenants              ← client API
     {
       "success": true,
       "tenants": [
         { "registered": true, "registeredId": "4001", ... }
       ]
     }
  9. Security
     • Data must be secure at all stages of transit.
     • Each user must be isolated.
     • Registration / handshake process must be resistant to attacks.
  10. Security
      • Data must be secure at all stages of transit.
      • Each user must be isolated.
      • Registration / handshake process must be resistant to attacks.
      • Use SSH w/ TLS to establish communication and API connection.
      • Keys for identity and SSH are generated on the client, and only the
        public part is shared with the server (see the sketch below).
      • HTTP(S) proxy with its own cert; the backend verifies the service cert.
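
      A minimal sketch of the registration idea. Only the client-side key
      generation and "share the public part" approach come from the slide; the
      /register endpoint, field name, CA file, and host are hypothetical.

      # generate identity/SSH keys locally; the private key never leaves the box
      $ ssh-keygen -t ed25519 -f onprem_client_key -N ''

      # share only the public key, verifying the proxy's certificate on the way
      $ curl --cacert proxy_ca.pem \
             --data-urlencode "publicKey@onprem_client_key.pub" \
             https://connector.example.com/register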
  11. Scalability / Stability
      • ~10k open sockets.
      • ~1k requests / sec.
      • Listen ports not immediately being reaped.
      • Network instability.
      • Silently dropped connections.
  12. Scalability / Stability
      • ~10k open sockets.
      • ~1k requests / sec.
      • Listen ports not immediately being reaped.
      • Network instability.
      • Silently dropped connections.
      • Automatic retries.
      • Self-healing connection and process restart.
      • Runs as a service.
      • Periodic heartbeat / loopback call (see the sketch below).
      • Port number shifting.
      • HA / Distributed server stack.
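
      A rough sketch of the retry and heartbeat ideas against the client API
      shown on slide 8. The /health endpoint, the "connector" service name, and
      the timing values are assumptions, not details from the talk.

      # automatic retries over an unstable network
      $ curl --retry 5 --retry-delay 2 --retry-connrefused localhost:8100/tenants

      # periodic heartbeat / loopback call; restart the (assumed systemd-managed)
      # connector service if the local API stops answering
      while true; do
        curl -sf --max-time 10 localhost:8100/health || systemctl restart connector
        sleep 60
      done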
  13. Down the Road
      • Events and notification
        ◦ Service / tunnel up/down
        ◦ Request failure notification
      • Queueing (some) requests in the server
      • Improved HA and scalability of server
  14. Conclusion: This section was only a single slide, so it doesn’t really
      need its own conclusion.