Connectivity, productivity, digitalization, and simplification have shaped the leading technological trends of the last several years. These tendencies drive the growing popularity of cloud services, and software architecture is also moving toward serverless.
Even though much has been said and written about serverless cloud computing, the number of discussions is steadily growing: just google “serverless conference 2018”, and you will get a list of at least fifteen events you can’t miss this year.
What is serverless computing?
First of all, let’s define what serverless technology is.
Serverless processing, compared to cloud computing as a whole, whose conceptual roots go back to the time-sharing systems of the 1960s, is still an immature technology, and is tricky to define exactly.
According to MartinFowler.com, “serverless architectures refer to applications that significantly depend on third-party services (known as Backend as a Service or “BaaS”) or on custom code that’s run in ephemeral containers (Function as a Service or “FaaS”)”.
In many ways, serverless FaaS is the next step after microservices. Notwithstanding the name, serverless applications’ code still runs on servers, which is why many authors note that “serverless” is actually a misnomer.
The difference is that compute containers are fully managed by a third party.
This approach lets developers stay focused on writing algorithms while service providers take care of deploying that code to the cloud for web, mobile, or IoT applications.
Serverless architecture use cases
For a better understanding of the opportunities, strong points, and weak points, let’s look at typical applications of serverless computing.
A couple of years ago, image processing, video content, and social networking were named the early frontrunners among serverless computing use cases. Whether you need to dynamically resize images, transcode video, or cognitively analyze pictures taken by drones, a serverless application could be a good option.
IoT data processing is definitely a “trendy” field. When it comes to obtaining data from sensors, sending it to the cloud, processing it, and sending results back to the particular device, the event-driven serverless approach makes things easier.
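As a minimal sketch of that event-driven pattern, consider a function that receives a sensor reading and decides what command to send back to the device. The device IDs, field names, and thresholds are purely illustrative; in a real FaaS deployment the event would arrive from an IoT message broker rather than a plain dict.

```python
def handle_sensor_event(event):
    """Hypothetical event handler for an IoT temperature sensor.

    On a FaaS platform this function would be wired to an IoT message
    broker; here the event is a plain dict so the logic runs anywhere.
    """
    reading = event["temperature_c"]
    device = event["device_id"]
    # Decide what to send back to the device based on the reading.
    if reading > 30.0:
        return {"device_id": device, "command": "fan_on"}
    if reading < 18.0:
        return {"device_id": device, "command": "heater_on"}
    return {"device_id": device, "command": "idle"}

print(handle_sensor_event({"device_id": "sensor-7", "temperature_c": 31.5}))
```

The appeal of the serverless model here is that each sensor message invokes the function independently, so thousands of devices can be handled in parallel without any capacity planning.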
Virtual assistants and chatbots are becoming increasingly popular, and here serverless frameworks, with their ability to process events in parallel and easily cope with peak workloads, also come in handy.
The most obvious uses are actions that react to events, such as changes to items in storage or in databases. Traditional request-and-response workloads, such as HTTP REST APIs, web applications, and mobile backends, are also good examples here: for instance, food delivery dispatch or taxi services.
We should also mention continuous integration and delivery pipelines, as implementing DevOps with serverless architectures is increasingly being called a trend.
Amazon was the first of the “Big Three” cloud providers to publicly launch a serverless computing service. This was AWS Lambda, announced in November 2014.
Google Cloud Functions and Microsoft Azure Functions both entered the market in 2016.
All of them offer event-driven architectures and charge only for the time during which the code is triggered and executed. Let’s take a closer look at each of the giants’ brainchildren.
Amazon AWS Lambda
Lambda is part of the AWS serverless platform: a service that runs uploaded code in response to events and automatically manages the underlying computing resources. Each trigger is processed individually, so the code runs in parallel, offering continuous scalability.
Lambda-based serverless applications are composed of functions triggered by events. As the AWS Lambda FAQ puts it, a typical serverless application consists of one or more functions triggered by events such as object uploads to Amazon S3, Amazon SNS notifications, or API actions. These functions can stand alone or leverage other resources such as DynamoDB tables or Amazon S3 buckets. The most basic serverless application is simply a function.
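To make that concrete, here is a minimal sketch of a Lambda function reacting to an S3 “object created” event. The event shape follows the documented S3 notification format, but the bucket and object names are invented for illustration, and the local invocation at the bottom stands in for what the Lambda runtime would do on AWS.

```python
import json
import urllib.parse

def lambda_handler(event, context):
    """Sketch of a Lambda handler for S3 upload notifications."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        results.append(f"{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps({"processed": results})}

# Local invocation with a hand-built event; on AWS, Lambda supplies it.
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "demo-bucket"},
                "object": {"key": "uploads/photo%201.jpg"}}}
    ]
}
print(lambda_handler(sample_event, None))
```

The handler signature (event plus context) is the standard Lambda convention; everything inside the function is ordinary Python, which is exactly why the model lets developers concentrate on the algorithm rather than the infrastructure.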
Amazon introduces a broad variety of AWS Lambda use cases such as real-time data, file and stream processing, data validation and transformation along with web applications. Besides, serverless backends can be built using AWS Lambda to handle web, mobile, Internet of Things, and third-party API requests.
GitHub offers a very comprehensive list of Serverless Examples – a collection of boilerplates and samples of serverless architectures built with the Serverless Framework and AWS Lambda.
Furthermore, Amazon offers its own project, AWS Serverless Application Model (AWS SAM), which prescribes rules for expressing serverless applications on AWS. It even has its own mascot, “SAM the Squirrel,” apparently designed to show how user-friendly and approachable Amazon serverless should be.
Over the past 3.5 years, AWS Lambda has won the trust of business titans like Autodesk, Netflix, The Coca-Cola Company, and Thomson Reuters.
Apart from Lambda, Amazon has introduced one more serverless-related service, Amazon Athena. It is an interactive query service that simplifies analyzing data in Amazon S3 using standard SQL. Athena is serverless, with no infrastructure to manage, and charges only for the queries that run. It’s already being called a promising solution for big data analytics.
Google Cloud Functions
Being a part of the Google Cloud platform, Cloud Functions features serverless application backends including integrations with third-party services and APIs, real-time data processing systems and intelligent applications as its common use cases.
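A minimal HTTP-triggered Cloud Function in Python is just a function that takes a request and returns a response. On Google Cloud the runtime passes a Flask request object; the `DummyRequest` class below is a stand-in invented here so the sketch can run anywhere, and it mimics only the single attribute the function uses.

```python
def hello_http(request):
    """Sketch of an HTTP-triggered Cloud Function.

    On Google Cloud, `request` would be a Flask request object;
    this sketch only reads its `args` mapping.
    """
    name = (request.args or {}).get("name", "World")
    return f"Hello, {name}!"

class DummyRequest:
    # Stand-in for the real request object, just enough for local testing.
    def __init__(self, args=None):
        self.args = args or {}

print(hello_http(DummyRequest({"name": "serverless"})))
```

Deployed on Cloud Functions, the same function would be exposed at an HTTPS endpoint with no server or routing configuration on the developer’s side.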
The Google Cloud serverless solution ran in beta for almost two years before its general availability release in July 2018. It arrived with a set of “several important developments to the serverless compute stack,” according to Google. Those include serverless containers on Cloud Functions; a Kubernetes serverless add-on; Knative, Kubernetes-based building blocks for serverless workloads; integration of Cloud Firestore with GCP services; and more.
Cloud Functions counts Alibaba.com, Shazam, The New York Times, and Todoist among the companies that have already enjoyed the benefits of its serverless architecture.
It’s fair to say that Google has been moving toward serverless since 2008, when the company launched App Engine, a fully managed platform, better classified as PaaS (platform as a service), used for building scalable web applications and mobile backends.
Over the last ten years, they have introduced a bundle of successful “around-serverless” solutions and integration patterns designed for serverless development: Cloud Datastore and Firebase, database products; Cloud ML Engine, a serverless machine learning service; Cloud Dataflow, a stream and batch data processing service; and a series of integrated services.
Microsoft Azure Functions
Microsoft emphasizes the security and reliability of Azure Functions, its fully managed serverless FaaS platform. What is interesting here is that the Functions runtime is open source and available on GitHub.
Besides web and mobile application backends and real-time file or stream processing, the Azure serverless computing platform can be used for automating scheduled tasks and extending SaaS applications.
CarMax, Fujifilm, Marc Jacobs, and Plexure have chosen Azure Functions as their serverless provider.
Going beyond Functions, Azure offers serverless approaches in Logic Apps, a workflow orchestration engine; Event Grid, an event routing service; Azure Stream Analytics, a real-time analytics offering for complex event processing; and more.
Serverless is growing in popularity, not only in the number of users but in the number of solutions and technologies. The recent additions to the serverless ecosystem by major market players like IBM and Oracle should be mentioned here too. Moreover, they are open source solutions and are frequently referred to as AWS Lambda alternatives.
IBM Cloud Functions
IBM Cloud Functions is a polyglot FaaS programming platform based on Apache OpenWhisk, an open source, distributed serverless platform that executes functions in response to events at any scale. OpenWhisk manages the infrastructure, servers and scaling using Docker containers.
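An OpenWhisk action in Python follows a simple convention: a `main` function that takes a dict of parameters and returns a JSON-serializable dict. The greeting logic below is just a placeholder to show the shape; on the platform the action would be deployed with the `wsk` CLI and triggered by events.

```python
def main(params):
    """An OpenWhisk-style Python action: dict in, dict out.

    OpenWhisk invokes main() with the event parameters merged into a
    single dict and expects a JSON-serializable dict as the result.
    """
    name = params.get("name", "stranger")
    return {"greeting": f"Hello, {name}!"}

# Locally the action is just a function call; on OpenWhisk it would be
# created with `wsk action create` and invoked by a trigger or HTTP call.
print(main({"name": "OpenWhisk"}))
```

Because the contract is only “dict in, dict out,” the same action code is easy to unit-test outside the platform, which softens the local-testing pain point discussed later in this article.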
Since Apache OpenWhisk builds its components from containers, it can run not only in the cloud but also on-premises.
IBM positions its product for building serverless microservices, web and IoT applications, API and mobile backends, data processing and cognitive functionality, and, what’s more, event processing with Kafka or Message Hub.
Oracle Cloud Fn
Quite similar to Apache OpenWhisk, Fn is an open source, container-native serverless platform that can run on any cloud (AWS, Google Cloud Platform, Azure, it doesn’t matter) or on-premises.
The Fn project is a serverless framework; at the moment Oracle doesn’t offer its own hosted FaaS platform like AWS Lambda, but presumably it will follow IBM’s lead and introduce one more competitive service.
What do they offer to their users?
Each serverless provider has its benefits and specific features, which are important to consider when choosing your solution. AWS Lambda was the pioneer and is still the most popular FaaS provider, integrated with the widest range of Amazon’s services and products.
Google provides seamless authentication within its infrastructure and offers outstanding innovative solutions like TensorFlow, Cloud Vision, and translation services.
Microsoft’s, Oracle’s and IBM’s runtimes are open source and can be hosted on your own infrastructure, and their support for Docker might be important in terms of migration from one service to another.
Here is a brief overview of the main attributes of the services described above:
| Attribute / Provider | Amazon AWS Lambda | Google Cloud Functions | Microsoft Azure Functions | IBM Cloud Functions | Oracle Cloud Fn |
|---|---|---|---|---|---|
| Type | FaaS platform | FaaS platform | FaaS platform | FaaS platform | Container-native serverless platform |
| Supported languages | Node.js, Python, Java, C# | Node.js, Python | Node.js, Python, PHP, C#, F# | Node.js, Python, Java, Go, Ruby, PHP, Swift, Scala + custom code within a Docker container | Node.js, Python, Java, Go, Ruby, PHP |
| Event triggers | Wide range of Amazon services: S3, DynamoDB, Kinesis, SNS, CloudWatch, etc. | HTTP, Google Cloud Storage, Google Cloud Pub/Sub, Google Firebase (DB, Storage, Analytics, Auth) | Azure Cosmos DB, Azure Event Hubs, Azure Mobile Apps, Azure Notification Hubs, Azure Service Bus, Azure Storage, GitHub, Twilio | Any external API-driven event | Any platform compatible with Docker |
| Maximum execution duration | 5 minutes (refer to AWS documentation) | 9 minutes (see quotas documentation) | 5 to 10 minutes (refer to the Consumption plan) | 5 to 10 minutes (refer to system details and limits) | N/A |
| Costs | $0.00001667 per GB-sec, $0.20 per 1M requests; 1M requests and 400,000 GB-sec free per month | $0.00001667 per GB-sec, $0.40 per 1M requests; the first 1M requests are free | $0.000016 per GB-sec, $0.20 per 1M requests; customers can also run Functions within their App Service plan at regular App Service plan rates | $0.000017 per GB-sec, no charge for invocations or API gateway access; 400,000 GB-sec free per month | N/A |
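To see how the GB-second pricing works out in practice, here is a small estimator using AWS Lambda’s list prices and free tier from the table above. Real Lambda bills round each invocation’s duration up to 100 ms increments; that detail is deliberately ignored here to keep the arithmetic transparent, so treat the result as an approximation.

```python
def lambda_monthly_cost(requests, avg_duration_ms, memory_gb,
                        price_per_gb_sec=0.00001667,
                        price_per_million_req=0.20,
                        free_requests=1_000_000,
                        free_gb_sec=400_000):
    """Approximate a monthly AWS Lambda bill from list prices.

    Ignores the 100 ms duration rounding that real billing applies.
    """
    # Compute usage: invocations x duration (seconds) x memory (GB).
    gb_seconds = requests * (avg_duration_ms / 1000.0) * memory_gb
    billable_gb_sec = max(0.0, gb_seconds - free_gb_sec)
    billable_requests = max(0, requests - free_requests)
    return (billable_gb_sec * price_per_gb_sec
            + billable_requests / 1_000_000 * price_per_million_req)

# 5M invocations of a 200 ms, 512 MB function in one month:
cost = lambda_monthly_cost(5_000_000, 200, 0.5)
print(f"${cost:.2f}")
```

Here the workload uses 500,000 GB-seconds, of which 100,000 are billable after the free tier, plus 4M billable requests, for roughly $2.47 a month. The same formula with a smaller workload lands entirely inside the free tier and costs nothing, which is exactly the “no upfront investments” point discussed later.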
Node.js and Python look like the ubiquitous languages for creating serverless applications. Earlier, our team researched the market and selected the best Node.js hosting services, also mentioning serverless functions, in this post.
Event triggers are another important point: they may tie you to a particular cloud provider or leave your choice open.
All services have limitations and quotas of some kind: AWS Lambda and Google Cloud Functions describe theirs comprehensively in their documentation, while it’s harder to understand exactly what is limited in Azure Functions and IBM Cloud Functions.
In particular, AWS Lambda has a default safety throttle on the number of concurrent executions per account per region. This limit of 1,000 concurrent executions can be raised by submitting a request to AWS support.
The pricing scheme, based on the number of executed requests and the duration of compute time, can be quite confusing at first. Most providers offer pricing examples showing total monthly spending for different types of usage. Besides, there are several third-party AWS Lambda cost calculators for estimating the cost of adopting serverless technologies.
Pros and cons
The growing list of businesses switching to serverless applications along with the number of FaaS providers and solutions are among the biggest endorsements for this technology.
Sounds inspiring? Absolutely. But nothing is ever completely one-sided, so what are the bright spots and what are the challenges?
Serverless computing offers the following opportunities:
- Abstraction of servers eliminates the issues related to operating system and application server administration, configuration, and maintenance. It aims to reduce the level of DevOps pain.
- Almost infinite automatic scalability. Could the workload dragon finally be defeated?
- Concentration on the application business logic, not the infrastructure. The product-centric approach rules.
- No upfront investments. Pay for the running code only.
But there are still some weaknesses to consider:
- Vendor lock-in, or look before you leap when choosing your provider. After adoption, migration to another service could be tough.
- Third-party API related issues, like serverless architecture security and privacy concerns, along with unexpected limits, changes, or updates. And it’s not all a matter of trust.
- Implementation drawbacks and architectural complexity. The headaches here are testing changes, which can’t easily be done locally, and managing the sheer number of functions that must be defined.
What to expect?
The technology is still in its nascent stage and there are many challenges to be solved, but serverless is expected to grow along with the number of adopters over the next few years. Research and Markets stated in its report that the FaaS market size is estimated to grow from USD 1.88 billion in 2016 to USD 7.72 billion by 2021.
Despite all the scalability and cost optimization benefits, serverless architecture is not a magic tool for every case. Due to timeout limits, it’s not suitable for long-running tasks or for tasks waiting on a response from a slow external resource. Heavy, CPU-intensive computations will eat up your budget quickly.
There are still a bunch of issues to be solved and concerns to be considered while migrating from a traditional application hosting to a serverless solution. Some of the most popular questions to ask are:
- Are third-party dependencies fully secure? Do I need to change something in my traditional logging mechanism?
- How do I monitor all my invoked functions? If I add another integrated cloud solution, how much will it cost?
- How much time should I invest into clearing up all the details of limits, resource allocation, and compatibility?
There have been hundreds of serverless conferences and workshops held over the last couple of years, discussing different aspects of the technology adoption for IoT, Big Data, Chatbots, Artificial Intelligence, etc. It will be interesting to see how that potential will be realized and what solutions will be offered to overcome those vulnerabilities.
On the other hand, there is still a market niche for serverless toolkits, integration, and implementation solutions, which is rapidly being filled with open source and hybrid services.