Serverless computing, often referred to as just "Serverless", is a cloud computing application development and execution model that enables developers to build and run application code without provisioning or managing backend infrastructure or servers.
Popular serverless platforms and services include AWS Lambda (Amazon Web Services), Azure Functions (Microsoft Azure), Google Cloud Functions (Google Cloud Platform), and various open source serverless frameworks like Apache OpenWhisk and Knative.
"Serverless" doesn't mean there is no server; rather, the servers that process your workloads are abstracted away from you, so you don't have to provision or manage them and are free to concentrate on your application code.
Serverless computing is beneficial for a wide range of use cases, including web and mobile application backends, APIs and microservices, real-time data processing, and IoT solutions.
Key characteristics of serverless computing include no server provisioning or management, automatic scaling with demand, event-driven execution, and pay-per-use billing.
Knative is an open-source platform that extends Kubernetes with a set of building blocks for deploying and managing serverless-style applications and container-based microservices. It abstracts away many of the underlying complexities of managing containers, providing automation and scaling capabilities so that developers can focus on writing code and defining functions.
The Knative serverless environment allows organizations to deploy code to Kubernetes platforms, such as Red Hat OpenShift. With Knative, a developer can create a service by packaging their code as a container image and handing it to the system. Because Knative starts and stops instances automatically, the code runs only when it needs to, which can reduce operating costs: the organization pays for cloud-based compute time only when it is actually used, rather than running its own servers.
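As an illustrative sketch (not taken from any specific deployment), a minimal Knative Service manifest looks roughly like the following; the service name, namespace, and container image reference are placeholders for your own, and the image is assumed to already be published to a registry:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                # placeholder service name
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/hello:latest   # placeholder image reference
          env:
            - name: TARGET
              value: "World"
```

Applying this manifest (for example with kubectl apply -f) asks Knative Serving to deploy the container, route traffic to it, and scale instances up and down automatically, including down to zero when no requests arrive.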
Function as a Service (FaaS) is a subset of serverless computing. Both FaaS and serverless are cloud computing models that abstract server management away from developers, allowing them to focus on writing code and deploying applications without worrying about the underlying infrastructure. There are, however, some key differences between FaaS and the broader concept of serverless:
In a FaaS model, applications are broken down into smaller, individual functions or microservices. Each function performs a specific task and can be triggered by events. FaaS platforms, such as AWS Lambda or Azure Functions, execute these functions in response to events, and you are billed based on the number of executions and the execution time of each function. Serverless computing encompasses a broader range of services and components beyond just functions, and can include services such as managed databases, authentication services, and more. Serverless applications may use FaaS for certain components but can also leverage other serverless services.
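As a minimal sketch of the FaaS model, an AWS Lambda function in Python is little more than a handler that receives the triggering event and a context object; the file and handler names below are common defaults but are configurable, and the event shape depends on the trigger:

```python
# handler.py - a minimal AWS Lambda-style function (Python).
# The platform invokes lambda_handler once per triggering event and
# bills for the number of invocations and their duration.
import json


def lambda_handler(event, context):
    """Echo back the triggering event as a JSON response."""
    return {
        "statusCode": 200,
        "body": json.dumps({"received": event}),
    }
```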
With FaaS, you don't have to worry about provisioning or managing servers. The cloud provider automatically scales the underlying infrastructure based on the number of function executions, and you normally pay only for the compute resources actually used while your functions run. Serverless encompasses a wider set of services, including databases, storage, and APIs. While you don't manage servers in either model, serverless applications may also rely on more complex backend components, such as managed databases or storage, which are provisioned and managed by the cloud provider.
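For instance, a function can use a managed, serverless database such as Amazon DynamoDB without provisioning any database servers. The sketch below assumes a DynamoDB table named "orders" already exists and that the incoming event carries an order_id field; both are hypothetical placeholders:

```python
# Sketch: writing to a managed serverless database (Amazon DynamoDB)
# from inside a function. No database servers are provisioned or managed;
# capacity and scaling are handled by the cloud provider.
import boto3

dynamodb = boto3.resource("dynamodb")   # created outside the handler so it is reused across invocations
table = dynamodb.Table("orders")        # hypothetical table name


def lambda_handler(event, context):
    # Assumes the triggering event includes an "order_id" field.
    table.put_item(Item={"order_id": event["order_id"], "status": "received"})
    return {"statusCode": 200}
```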
FaaS is particularly suited for event-driven, stateless functions. It's commonly used for tasks like image processing, data transformation, real-time data processing, and building microservices. Serverless can be (and often is) applied to a broader range of use cases, including web applications, mobile applications, IoT solutions, and more. Serverless can involve orchestrating multiple functions, services, and resources to build complete applications.
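A typical event-driven, stateless FaaS function for data transformation might be triggered by object-created notifications from an S3 bucket, as in the sketch below; the output bucket name and the uppercase "transformation" are placeholders for a real workload:

```python
# Sketch of an event-driven, stateless transformation function: it is
# triggered when an object lands in a source S3 bucket, transforms the
# content, and writes the result to an output bucket.
import urllib.parse

import boto3

s3 = boto3.client("s3")
OUTPUT_BUCKET = "my-transformed-data"   # hypothetical output bucket


def lambda_handler(event, context):
    for record in event["Records"]:     # S3 notifications can batch several records
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        # Placeholder transformation: upper-case the text content.
        s3.put_object(Bucket=OUTPUT_BUCKET, Key=key, Body=body.upper())
```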
FaaS platforms typically charge based on the number of function executions and the duration of each execution. This pay-as-you-go model can be cost-effective for sporadic or bursty workloads. It may also be well suited to end-user PAYG (pay-as-you-go) models where there is a direct link between usage and billing. Serverless services often have a pricing model that combines resource usage (e.g., storage, data transfer) with event-driven function execution costs. The pricing structures can vary depending on the specific services used.
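A back-of-the-envelope estimate shows how FaaS pricing combines a per-request charge with a duration-and-memory (GB-second) charge. The rates below are illustrative placeholders only, not a quote of any provider's current prices:

```python
# Rough FaaS cost estimate. The per-request and per-GB-second rates are
# illustrative placeholders; check your provider's current price list.
PRICE_PER_MILLION_REQUESTS = 0.20      # illustrative, USD
PRICE_PER_GB_SECOND = 0.0000166667     # illustrative, USD

requests_per_month = 3_000_000
avg_duration_s = 0.120                 # 120 ms average execution time
memory_gb = 0.512                      # 512 MB allocated per invocation

request_cost = requests_per_month / 1_000_000 * PRICE_PER_MILLION_REQUESTS
compute_cost = requests_per_month * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND

print(f"Requests: ${request_cost:.2f}  Compute: ${compute_cost:.2f}  "
      f"Total: ${request_cost + compute_cost:.2f}")
```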
In summary, FaaS is a specific implementation of serverless computing that focuses on executing small, event-triggered functions, while serverless computing is a broader paradigm that encompasses a wide range of cloud services, including FaaS, managed databases, storage, and more.
Adopting a serverless paradigm does not tie you to a single programming language. Serverless is a polyglot environment, enabling developers to code in any language or framework they choose; popular choices include Java, Python, .NET, JavaScript, and Node.js.
With the increased use of serverless technologies, it becomes impractical for APM (Application Performance Monitoring) tools to require you to install an agent. Applications are broken down further into individual functions that run on function-as-a-service platforms, so whatever collects performance data must already be in place and start collecting the moment the function runs, as sketched below.
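One common pattern, sketched generically here (it is not the API of any particular APM product), is to ship lightweight instrumentation with the function itself, for example as a decorator or a packaged layer, so that telemetry is emitted from the very first invocation:

```python
# Generic sketch of agentless, in-process instrumentation: a decorator
# that times each invocation and emits a structured log line that a
# monitoring backend can ingest.
import functools
import json
import time


def monitored(handler):
    @functools.wraps(handler)
    def wrapper(event, context):
        start = time.perf_counter()
        error = None
        try:
            return handler(event, context)
        except Exception as exc:            # re-raised after being recorded
            error = type(exc).__name__
            raise
        finally:
            duration_ms = (time.perf_counter() - start) * 1000
            print(json.dumps({
                "function": getattr(context, "function_name", "unknown"),
                "duration_ms": round(duration_ms, 2),
                "error": error,
            }))
    return wrapper


@monitored
def lambda_handler(event, context):
    return {"statusCode": 200}
```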
With potentially very many microservices running across numerous VMs for even a single application, you need a converged view of overall performance. You need to know how your infrastructure is performing, whether in the cloud or not, and how the application is performing on top of it. You also need contextual visibility to understand what is affecting application performance. A correlated view of the application and all the supporting IT infrastructure tiers helps determine the root cause: is it an application code issue, a bottleneck in the cloud, network connectivity, container performance, and so on?
When leveraging serverless deployment models, you should choose tools that combine both APM and infrastructure performance monitoring (IPM) capabilities. Serverless and associated technologies are driving the demand for converged application and infrastructure monitoring tools such as eG Enterprise.