I learned a lot of valuable information and met many intelligent technology experts at the Amazon Web Services (AWS) re:Invent 2016 conference this past December. But there was one specific takeaway that stuck with me more than any other. One word that I heard in almost every session that I attended: "AWS Lambda".
This is how and when I was introduced to the topic of "Serverless Architecture". Obviously, any computation needs an execution engine running on a CPU (i.e. a server). So, is "Serverless" a new way of computing? Does this spell the end of servers? Is Serverless the right architecture?
Let's start with a brief evolution of computing to help us better understand what this trend means and where the industry is heading.
Evolution of Computing
When we look at the evolution of computing, it's very clear that we're moving rapidly toward greater abstraction between the application developer and the underlying hardware. That motivation has led to serverless architecture. In this model, the developer builds and runs applications and services without having to manage the infrastructure. The applications still run on servers, but a 3rd party handles all of the server management: you no longer have to provision, scale, and maintain servers to run your applications, databases, and storage systems. This abstraction naturally takes many of the knobs out of your control as a developer. So the term "serverless computing" means that the business or person who owns the system does not have to purchase, rent, or provision the servers or virtual machines on which the back-end code runs.
AWS has been one of the strongest advocates of serverless computing, particularly since introducing AWS Lambda. AWS Lambda has become synonymous with "serverless computing" and "FaaS": Function as a Service (yet another *aaS acronym). AWS Lambda lets developers write functions in Node.js, Python, Java, and C#, and these functions are the units of deployment. There are other big players in this space as well, such as Microsoft with Azure Functions and Google with Google Cloud Functions.
Let's look at an example to better understand these concepts. The diagram below details a scenario in which you upload a file to AWS S3, and the file is then compressed and re-uploaded to S3. The code to compress the file can be written as an AWS Lambda function. Once the Lambda function is set up, your application can upload as many files as it wants without worrying about server manageability, scalability, or high availability. All of this is managed by Amazon, and you pay only each time your Lambda function (code) is executed. So, if you compress 100 files, your Lambda function is called 100 times and you pay only for that. You do not have to pay for the servers or for managing them.
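To make this concrete, here is a minimal sketch of what such a compression function could look like in Python, one of Lambda's supported languages. The bucket and key names come from the S3 event that triggers the function; the handler structure follows the standard Lambda event/context convention, and the `.gz` suffix is an illustrative choice, not anything prescribed by the scenario.

```python
# Minimal sketch of the S3 compression scenario (illustrative only).
import gzip


def compress_bytes(data: bytes) -> bytes:
    """Gzip-compress a byte payload."""
    return gzip.compress(data)


def handler(event, context):
    """Lambda entry point for an S3 'ObjectCreated' trigger."""
    import boto3  # available in the AWS Lambda runtime; imported lazily here

    s3 = boto3.client("s3")
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Download the uploaded object, compress it, and re-upload it.
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    s3.put_object(Bucket=bucket, Key=key + ".gz", Body=compress_bytes(body))
    return {"compressed": key + ".gz"}
```

You deploy only this code; provisioning, scaling, and patching the machines that run it are entirely the provider's problem.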
Now, let's explore a couple scenarios to better understand how Dell Boomi AtomSphere can fit into a Serverless Architecture paradigm.
Boomi and Serverless Architecture
Scenario 1: Lambda Function Callout to Boomi Process REST API
Boomi allows you to expose any process as a REST API via API Management. As we have already discussed, AWS Lambda, Azure Functions, and Google Cloud Functions support various languages as well as plug-ins for calling a web service or REST API. We can leverage this functionality to invoke a Boomi process whenever certain actions occur in AWS, Azure, or Google Cloud.
The above diagram represents a use case where, once a file is uploaded to S3, a Lambda function is invoked that sends the data to Amazon SQS. The next step is to insert the data into RDS, but first the data needs to be transformed and translated. You can leverage Lambda functions to call Boomi processes that translate and transform the data. This provides an effective solution for much of the middleware functionality that is missing from AWS Lambda, Azure Functions, and Google Cloud Functions, and it introduces a new design pattern as FaaS becomes more popular with enterprise architects.
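The callout itself is just an HTTP request from the function to the Boomi-exposed endpoint. The sketch below shows one way a Lambda handler might forward its event payload to such an API; the endpoint URL is a made-up placeholder, not a real Boomi address, and in practice the request would also carry the credentials configured in Boomi API Management.

```python
# Hypothetical sketch: forwarding a Lambda event to a Boomi process
# exposed as a REST API. The endpoint URL below is an assumption.
import json
import urllib.request

BOOMI_ENDPOINT = "https://example.invalid/ws/rest/transform"  # placeholder


def build_request(payload: dict, endpoint: str = BOOMI_ENDPOINT):
    """Build the POST request that would invoke the Boomi process."""
    return urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def handler(event, context):
    """Send the S3/SQS event records to Boomi and return its response."""
    req = build_request({"records": event.get("Records", [])})
    with urllib.request.urlopen(req) as resp:  # network call at runtime
        return json.loads(resp.read())
```

The same pattern applies unchanged in Azure Functions or Google Cloud Functions: the function body is ordinary HTTP client code.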
Scenario 2: Native Cloud to Cloud Integration
In the above use case, data residing in several files needs to be transformed, enriched, and then inserted into a database. In this scenario, we can use AWS S3 or Azure Storage to store the files and Azure SQL Database or AWS Aurora as our end destination. The only remaining piece is the actual transformation and conditional logic that we need to apply before the data reaches the destination. This is where Boomi AtomSphere, our integration platform as a service (iPaaS), comes into the picture. Using the platform and the branded connectors for both AWS and Azure, you can build a process that reads and writes the data, then transforms and enriches it. We can leverage Boomi's multi-tenant cloud runtime environment to execute the process.
Let's try to understand what we actually did here. We solved a use case that moved data from a hosted MFT to a hosted DB, and the code (Boomi process) executed on a hosted runtime. We achieved all of this without having to install any software or servers, and without worrying about server manageability, scalability, or high availability. In essence, the Boomi process behaves as a "FaaS" in this scenario, and the platform and runtime satisfy all the requirements of a "Serverless Architecture".
Now, I am not trying to force-fit the Boomi platform into being a FaaS. I am simply pointing out that you can leverage the Boomi platform to build and execute functions without having to worry about infrastructure or manageability. But again, it's a new design pattern for enterprise architects to think about.
Pros and Cons
Now, let's look at some of the pros and cons of Serverless Architecture.
Pros:
- Faster deployment - You deploy only the code; all of the infrastructure is taken care of by the FaaS provider.
- Reduced operations cost - No hardware means you don't have to worry about tools or infrastructure, thereby reducing cost.
- Scaling costs - You do not pay for resources while they sit idle.
- Mode 2 fit - Ideal for Mode 2 (as defined by Gartner) projects, because you can spin resources up and down quickly, helping you "Win fast, lose fast, but most importantly, learn fast."

Cons:
- Startup latency - Resources are shared rather than dedicated to a single tenant, so there can be delays ("cold starts") on initial execution until resources are spun up and allocated.
- Vendor lock-in - This is true of any 3rd-party resource, and it applies here too.
- Limited configuration - Limited access to the underlying resources means less control, and hence limited ability to tune.
- Monitoring/debugging - With limited access, it can be challenging to find the root cause of a problem, and the built-in IDEs are limited in functionality.
Serverless architecture is a general concept, and FaaS is a derivative of it that allows server-side code to be executed as hosted, independent functions. Some of the popular vendors in this space are AWS, Microsoft Azure, and Google. "Pay-per-use" is one of the most popular pricing models for many enterprises, and serverless architecture supports it by default. As we have discussed, Boomi can behave as a FaaS, and you can also leverage a Boomi process from the FaaS of your choice.
As with any architecture, there are advantages and disadvantages to serverless architecture, and it is not a solution to every problem. Serverless is a new design pattern; although it is in its infancy, it is rapidly evolving. Enterprise architects should start taking this pattern seriously and start incorporating it in their designs.
It will be interesting to see how enterprises adopt it and the creative solutions that come out of this. I'm curious to know: how do you see Serverless Architecture playing a role at your company?
References and Additional Reading
- Serverless Architectures - martinfowler.com
- Serverless computing - Wikipedia
- AWS re:Invent 2016: Getting Started with Serverless Architectures (CMP211) - YouTube
- AWS re:Invent 2016: Serverless Architectural Patterns and Best Practices (ARC402) - YouTube
Thameem Khan is an enterprise architect at Dell Boomi and has been instrumental in competitively positioning Dell Boomi with our large enterprise customers. He is good at two things: manipulating data and connecting clouds.