Serverless computing is a cloud execution model in which a cloud provider dynamically allocates, and then charges the user for, only the compute and storage resources required to execute a specific piece of code. There are still servers involved, but their provisioning and maintenance are handled entirely by the provider. From the perspective of the team writing and deploying the code, there are no servers to provision or manage: no bare-metal machines, no virtual machines, no containers. Anything that requires you to provision a host, patch a host, or otherwise work at the operating-system level is not something you should have to do in the serverless world.
When people talk about serverless computing or serverless architecture today, they are usually referring to function-as-a-service (FaaS) offerings, in which a customer writes code that addresses only business logic and uploads it to a provider. That provider handles all hardware provisioning, virtual machine and container management, and even concerns such as multithreading that would otherwise be built into application code.
Serverless functions are event-driven, which means the code is executed only when triggered by a request or event. Instead of a flat monthly fee for running a physical or virtual server, the provider charges only for the compute time consumed by that execution. These functions can be linked together to form a processing pipeline, or they can be used as components of a larger application, communicating with other code running in containers or on traditional servers.
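To make the model concrete, here is a minimal sketch of an event-driven function written in the style of an AWS Lambda Python handler. The event shape, the order/price fields, and the assumption that an API gateway or queue invokes the function are illustrative, not a specific provider contract.

```python
import json


def handler(event, context):
    """Entry point invoked by the platform for each event.

    The platform provisions the runtime, passes in the triggering event
    (for example an HTTP request or a queue message), and bills only for
    the time this function actually runs.
    """
    # Business logic only: no server, OS, or container management here.
    order = json.loads(event.get("body", "{}"))
    total = sum(item["price"] * item["qty"] for item in order.get("items", []))

    # The return value is handed back to the caller (e.g., an API gateway).
    return {
        "statusCode": 200,
        "body": json.dumps({"orderTotal": total}),
    }
```

The same function could just as easily be wired to a message queue or an object-storage event as to an HTTP endpoint; only the trigger configuration changes, not the business logic.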
Advantages of serverless computing
Serverless computing offers two especially significant advantages: developers can focus on the business goals of the code they write rather than on infrastructure concerns, and organizations pay only for the compute resources they actually use, at a very granular level, rather than buying physical hardware or renting cloud instances that sit mostly idle. For example, you may have an application that is inactive most of the time but must handle a large number of event requests at once under certain conditions.
Alternatively, you may have an application that processes data sent from IoT devices with sporadic or limited Internet connectivity. In both cases, the traditional approach would be to deploy a server large enough to handle peak load, even though that server would be underutilized most of the time. With a serverless design, you pay only for the compute resources you actually use. Serverless computing can also be a good fit for certain types of batch processing. A classic example of a serverless architecture use case is a service that uploads a series of individual image files, processes them, and passes them on to another part of the application.
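As a rough illustration of that image-processing use case, the sketch below assumes an object-storage trigger in the style of AWS S3 event notifications and uses boto3 and Pillow. The bucket names, key prefix, and event shape are hypothetical and would differ by provider.

```python
import io

import boto3                      # AWS SDK for Python; assumed available in the runtime
from PIL import Image             # Pillow, bundled with the deployment package

s3 = boto3.client("s3")
THUMBNAIL_BUCKET = "my-thumbnails-bucket"   # hypothetical destination bucket


def handler(event, context):
    """Triggered once per uploaded image; resizes it and stores a thumbnail."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Download the original image from object storage.
        original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        # Resize entirely in memory; the function itself stays stateless.
        image = Image.open(io.BytesIO(original))
        image.thumbnail((256, 256))
        buffer = io.BytesIO()
        image.save(buffer, format="PNG")
        buffer.seek(0)

        # Hand the result to the next stage of the application.
        s3.put_object(Bucket=THUMBNAIL_BUCKET, Key=f"thumb-{key}", Body=buffer)
```

Because each uploaded file generates its own event, the provider can fan out to as many parallel invocations as there are files, and the cost drops to zero when nothing is being uploaded.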
Disadvantages of serverless computing
The most obvious disadvantage of serverless services is that they are designed to be transient, which makes them unsuitable for long-running tasks. With most serverless providers, a single invocation cannot run for more than a few minutes, and a function retains no state between invocations. Another issue is cold-start latency: it can take several seconds for serverless code to begin running after a period of inactivity. This is not a big deal in most situations, but it is something to keep in mind if your application has strict latency requirements. The difficulty of switching providers, commonly called vendor lock-in, is another major drawback.
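Because a function keeps no state between invocations, any data that must survive a run has to be written to external storage. The sketch below shows that pattern, assuming a DynamoDB-style table named "request-counters"; the table name and key schema are illustrative assumptions.

```python
import boto3  # AWS SDK; any external key-value store would serve the same purpose

# The table name and key schema here are illustrative assumptions.
table = boto3.resource("dynamodb").Table("request-counters")


def handler(event, context):
    """Counts requests per user across invocations.

    Local variables vanish whenever the execution environment is recycled,
    so the running count is kept in an external table instead.
    """
    user_id = event.get("userId", "anonymous")

    # Atomically increment the counter stored outside the function.
    response = table.update_item(
        Key={"userId": user_id},
        UpdateExpression="ADD requestCount :one",
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )

    count = int(response["Attributes"]["requestCount"])
    return {"userId": user_id, "requestCount": count}
```

Externalizing state this way keeps the function itself disposable, which is exactly what the pricing and scaling model of serverless platforms assumes.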
The major commercial cloud providers have largely cornered the market on serverless computing, despite the availability of open source alternatives. As a result, many developers end up adopting vendor-supplied tooling even when they are not happy with it. And because serverless computing relies so heavily on the vendor's infrastructure, it can be challenging to incorporate serverless code into internal development and testing processes.
Written and curated by: Amit K Jain, Enterprise Architect, Salesforce Certified Technical Architect (CTA), Deloitte Consulting, USA. You can follow him on LinkedIn.