
Let's talk about Serverless

Slides from a Serverless-focused presentation, prepared for a meeting with an audience of senior Technology Directors and Managers. It provides a high-level view of the Biz, Dev, and Ops opportunities and challenges of Serverless Computing, as of July 2017.
*** Original included gifs ***

Dimitri Koutsos

July 18, 2017


Transcript

  1. The name is causing a lot of confusion and debate. Different people have different opinions, naturally.
  2. “It’s not really ‘serverless’ but more ‘I don’t care what a server is’” - Simon Wardley

     But we all use it to mean more or less the same thing: “Cloud computing could have been called datacenterless” - Adrian Cockcroft
  3. The promise of Serverless:
     • develop fast
     • deploy fast
     • focus only on your business logic
     • don't worry about scalability, fault tolerance, etc.
     • pay only for what you actually used

     If you think you’ve heard it before, you are right. That’s not a bad thing.
  4. Serverless carries on the trend towards higher levels of abstraction in cloud programming models. It’s currently exemplified by the Function-as-a-Service (FaaS) model, which is built on the idea of dynamically allocating resources for event-driven function execution.
  5. Serverless is evolution, not revolution. The FaaS model represents the next step of the Microservices & Cloud Native architectures towards abstraction of the lower layers of the stack. Developers write small, stateless code snippets and allow the platform to manage the complexities of executing the function in a scalable and fault-tolerant manner.

     [Diagrams: Serverless Platform Architecture; IBM OpenWhisk Architecture]

     The service must manage a set of user-defined functions, take an event sent over HTTP or received from an event source, determine which function(s) to dispatch the event to, find an existing instance of the function or create a new one, send the event to the function instance, wait for a response, gather execution logs, make the response available to the user, and stop the function when it is no longer needed. The challenge is to implement such functionality while considering metrics such as cost, scalability, and fault tolerance. The platform must quickly and efficiently start a function and process its input. It also needs to queue events and, based on the state of the queues and the arrival rate of events, schedule the execution of functions and manage stopping and deallocating resources for idle function instances. In addition, the platform needs to carefully consider how to scale and manage failures in a cloud environment. A minimal sketch of this dispatch flow follows below.
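     An illustrative sketch of the dispatch flow described above. All class and method names here are hypothetical, not any vendor's API; a real platform would run instances in isolated sandboxes rather than in-process callables.

```python
# Toy model of the FaaS dispatch loop: register functions, route events,
# reuse warm instances, gather execution logs, reap idle instances.
import time

class FaasPlatform:
    def __init__(self):
        self.functions = {}       # name -> user-defined callable
        self.warm_instances = {}  # name -> (instance, last_used timestamp)
        self.logs = []

    def register(self, name, fn):
        """Register a user-defined function under a name."""
        self.functions[name] = fn

    def dispatch(self, name, event):
        """Route an incoming event to a function instance and return the result."""
        if name not in self.functions:
            raise KeyError(f"no function registered for '{name}'")
        # Reuse a warm instance if one exists, otherwise "cold start" a new one.
        instance, _ = self.warm_instances.get(name, (None, None))
        if instance is None:
            instance = self.functions[name]            # simulate instance creation
        start = time.time()
        result = instance(event)                       # synchronous invocation
        self.logs.append((name, time.time() - start))  # gather execution logs
        self.warm_instances[name] = (instance, time.time())
        return result

    def reap_idle(self, max_idle_seconds=300):
        """Stop and deallocate instances that have been idle too long."""
        now = time.time()
        self.warm_instances = {
            n: (inst, t) for n, (inst, t) in self.warm_instances.items()
            if now - t < max_idle_seconds
        }

# Usage
platform = FaasPlatform()
platform.register("hello", lambda event: f"hello, {event.get('name', 'world')}")
print(platform.dispatch("hello", {"name": "serverless"}))
```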
  6. PaaS: PaaS does share a lot with Serverless, such as no provisioning of machines and autoscaling, but the unit of computation is much more granular in the latter. Serverless computing is job-oriented rather than application-oriented. Most PaaS applications are not geared towards bringing entire applications up and down for every request, whereas FaaS platforms can do exactly this.

     Microservices: They are not a cloud technology but a software application architecture pattern. Microservices can be implemented with functions and serverless ecosystem offerings.

     Containers: They are typically the basic building blocks used by serverless offerings, enabling providers to rapidly provision functions and provide isolation. Jeremy Edberg (former chief reliability engineer for Netflix and now co-founder and chief product officer at CloudNative.io): "Containers will be a fad that will quickly disappear. People doing containers will just move on to creating serverless applications."
  7. Changes the Economics (again): Serverless could change the economics of computing. As developers take advantage of smaller granularities of computational units and pay only for what is actually used, will that change how developers and businesses think about solutions?
  8. It could lead to Worth Based Development, a.k.a. FinDev:
     • Map the business functions to code
     • Quantify all parts of the business model
     • Identify the most costly parts and improve them
     • Reuse functions (a marketplace opportunity) and decide where to use COTS and where to invest in DEV
     http://blog.gardeviance.org/2016/11/why-fuss-about-serverless.html
  9. It’s disruptive! “A new generation of vendors will make current SaaS vendors lose. The new vendors will attack current SaaS vendors where it hurts most: the pricing model, the efficiencies and the business model. How will the new upstarts do it? They will abandon the VM-based model. They will go 100% serverless. They will offer true usage-based pricing. They will appeal to customers that want to use without monthly commitments. They will charge a lot less than current SaaS vendors. They will scale much faster and more effectively. They will not have weekly maintenance windows.”
  10. Vendor Lock-In
     • Due to the limited and stateless nature of serverless functions, an ecosystem of scalable services that support the different functionalities a developer may require is essential to having a successfully deployed serverless application.
     • For example, many applications will require the serverless function to retrieve state from permanent storage (such as a file server or database); see the sketch below.
     • There may be an existing ecosystem of functions that support API calls to various storage systems.
     • While the functions themselves may scale due to the serverless guarantees, the underlying storage system itself must provide reliability and QoS guarantees to ensure smooth operation.
     • Serverless functions can be used to coordinate any number of systems such as identity providers, messaging queues, and cloud-based storage.
     • Dealing with the challenges of scaling these systems on demand is just as critical, but it is outside the control of the serverless platform.
     • To increase the adoption of serverless computing, there is a need to provide such scalable services.
     • Such an ecosystem enables ease of integration and fast deployment, at the expense of vendor lock-in.
     • That may be changing, as open source solutions may work well across multiple cloud platforms.
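     A hedged sketch of the "retrieve state from permanent storage" case: an AWS Lambda handler that pulls a record from DynamoDB via boto3. The table name, key schema, and event fields are assumptions for illustration, not part of the original deck.

```python
# The function itself is stateless; all durable state lives in DynamoDB,
# whose own scaling and QoS are outside the serverless platform's control.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("user-profiles")  # hypothetical table name

def handler(event, context):
    user_id = event["user_id"]  # assumed event field
    response = table.get_item(Key={"user_id": user_id})
    profile = response.get("Item", {})
    return {"statusCode": 200, "body": profile}
```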
  11. Security Concerns: It turns out that serverless, by the very fact of its convenience and low-cost model, may lead to laziness and poor security. With serverless there is zero cost to deploying functions, so you deploy everything. Make sure you know what is disposable, and dispose of it.
  12. However, there are more legitimate concerns:
     • Only certain languages are supported
     • Integration testing is a pain!
     • Immature, hacky frameworks
     • Operational challenges
     • etc., etc.
  13. DevOps will eventually become the new legacy (see ITIL):
     1. Focus on Value
     2. Design for Experience
     3. Start Where You Are
     4. Work Holistically
     5. Progress Iteratively
     6. Observe Directly
     7. Be Transparent
     8. Collaborate
     9. Keep It Simple

     The Serverless & FaaS wave will come up with its own memes. But Serverless does not mean NoOps. ‘Ops’ means a lot more than server administration. It also means at least monitoring, deployment, security, and networking, and often some amount of production debugging and overall system scaling as well. These problems all still exist with Serverless apps, and you’re still going to need a strategy to deal with them (see the sketch below). In some ways Ops is harder in a Serverless world because a lot of this is so new.

     Lambda is the new bash! - Adrian Cockcroft

     What’s next for Ops?
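     A hedged illustration that monitoring does not disappear along with the servers: a handler that publishes a custom latency metric from inside the function. The namespace, metric name, and business-logic helper are illustrative assumptions.

```python
# Emit a custom CloudWatch metric per invocation so that alerting and
# dashboards still have something to work with in a serverless setup.
import time
import boto3

cloudwatch = boto3.client("cloudwatch")

def handler(event, context):
    start = time.time()
    result = do_business_logic(event)   # hypothetical application code
    cloudwatch.put_metric_data(
        Namespace="MyApp/Serverless",   # assumed namespace
        MetricData=[{
            "MetricName": "HandlerLatencyMs",
            "Value": (time.time() - start) * 1000.0,
            "Unit": "Milliseconds",
        }],
    )
    return result

def do_business_logic(event):
    # Placeholder for the real work this function would do.
    return {"ok": True}
```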
  14. Bursty & spiky workloads fare well because the developer offloads the elasticity of the function to the platform. Just as important, the function can scale to zero, so there is no cost to the consumer when the system is idle, making it ideal for infrequently used parts of a system. Typical use cases:
     • Infrastructure and glue tasks, such as reacting to an event triggered from cloud storage or a database
     • Mobile and IoT apps, to process events such as user check-ins or aggregation functions
     • Image processing, for example to create preview versions of an image or extract key frames from a video (see the sketch below)
     • Data processing, like simple extract, transform, load (ETL) pipelines to preprocess datasets
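     A hedged sketch of the image-preview use case: a function triggered by an object-created event in cloud storage that writes a thumbnail back to a second bucket. The bucket names are assumptions, and Pillow would need to be bundled with the deployment package.

```python
# React to an S3 object-created event and generate a 256x256 preview.
import boto3
from PIL import Image

s3 = boto3.client("s3")
PREVIEW_BUCKET = "my-previews"  # hypothetical destination bucket

def handler(event, context):
    # Standard S3 event notification shape.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    src = f"/tmp/{key.split('/')[-1]}"
    s3.download_file(bucket, key, src)

    # Create a small preview and write it to a separate bucket.
    img = Image.open(src)
    img.thumbnail((256, 256))
    dst = src + ".preview.jpg"
    img.convert("RGB").save(dst, "JPEG")
    s3.upload_file(dst, PREVIEW_BUCKET, f"previews/{key}")
    return {"status": "ok", "preview": f"previews/{key}"}
```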
  15. Serverless functions have limited expressiveness because they are built to scale. To maximize scaling, serverless functions do not maintain state between executions. Instead, the developer can write code in the function to retrieve and update any needed state. This has a huge impact on application architecture, albeit not a unique one: the ‘Twelve-Factor App’ concept has precisely the same restriction. Stateful services are best implemented outside of serverless functions. Integration points with other platform services such as databases, message queues, or storage are therefore extremely important, but lead to “vendor lock-in”. A potential future solution would be low-latency access to an out-of-process cache with very low network overhead, e.g. access to Redis/ElastiCache with some sort of placement group set up (a sketch of this pattern follows below). It’s likely that the technological advancements that brought us distributed systems and microservices will keep delivering on that front, i.e. even faster than 25 Gbit/s networks, binary communication protocols, etc. We are also likely to see new application architectures. For instance, a regular server handling an initial request, gathering all the context necessary to process that request from its local and external state, then farming off a fully-contextualized request to FaaS functions that themselves don’t need to look up data externally.

     Stateless is the new Orange (which was the new black).

     Limits to the “infinite” scalability: There are a variety of limits set on the runtime resource requirements of serverless code, including the number of concurrent requests and the maximum memory and CPU resources available to a function invocation. Some limits may be increased when users’ needs grow, such as the concurrent request threshold, while others are inherent to the platforms, such as the maximum memory size. One gotcha is that these limits apply across your whole AWS account. That means if someone, somewhere in your organization runs a new type of load test and starts trying to execute 1000 concurrent Lambda functions, you’ll accidentally DoS your production applications. Oops.
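     A sketch of the "state lives outside the function" pattern mentioned above, using Redis/ElastiCache as the out-of-process cache. The endpoint and key layout are assumptions.

```python
# Each invocation rehydrates what it needs from an external low-latency
# cache rather than relying on in-process state between executions.
import json
import redis

# Connection is created outside the handler so warm invocations can reuse it.
cache = redis.Redis(host="my-elasticache-endpoint", port=6379, decode_responses=True)

def handler(event, context):
    session_id = event["session_id"]  # assumed event field
    raw = cache.get(f"session:{session_id}")
    session = json.loads(raw) if raw else {"visits": 0}

    # Business logic mutates the rehydrated state, never in-process globals.
    session["visits"] += 1
    cache.set(f"session:{session_id}", json.dumps(session), ex=3600)
    return {"visits": session["visits"]}
```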
  16. All of the above are being addressed rapidly and constantly, in innovative ways, by the vendors, the OSS community and academia. Case in point: OpenLambda. “Serverless computing introduces many new research challenges in the areas of sandboxing, session management, load balancing, and databases. In order to facilitate work in these areas, we are building OpenLambda, an open-source serverless computing platform.”

     Execution Duration: FaaS functions are typically limited in how long each invocation is allowed to run. For example, at present AWS Lambda functions are not allowed to run for longer than 5 minutes, and if they do they will be terminated.

     Startup Latency: At present, how long it takes your FaaS function to respond to a request depends on a large number of factors, and may be anywhere from 10 ms to 2 minutes. If your function is implemented in JavaScript or Python and isn’t huge (i.e. less than a thousand lines of code), then the overhead of running it should be no more than 10 - 100 ms. Bigger functions may occasionally see longer times. Improvements in container technologies and FaaS implementations will make those increasingly shorter. A sketch of inspecting and adjusting per-function limits follows below.
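     A hedged sketch of inspecting and raising a function's duration and memory settings with the AWS SDK. The function name is a placeholder, and hard platform ceilings (such as the 5-minute cap mentioned above, at the time) still apply.

```python
# Read and then bump a Lambda function's per-invocation limits.
import boto3

lam = boto3.client("lambda")

current = lam.get_function_configuration(FunctionName="my-etl-function")
print(current["Timeout"], current["MemorySize"])

# Raise the timeout and memory within the platform's ceiling.
lam.update_function_configuration(
    FunctionName="my-etl-function",  # hypothetical function name
    Timeout=300,      # seconds
    MemorySize=1024,  # MB; CPU share scales with memory on Lambda
)
```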
  17. “Serverless” is here to stay, and the concept will carry on evolving. It’s attractive because it further removes undifferentiated heavy lifting in a world that's being eaten by software, where all businesses have become technology businesses, whether they noticed it or not. It will have an impact on the way apps are developed and delivered. It will lead to new kinds of programming models, application and platform architectures. It will further evolve the meaning and practice of Operations and shift it even more to Ops as Code, config management, and API-driven tools and processes.
  18. As always, the future is already here, it’s just not evenly distributed. But computers are not going away!