
Everything You Need To Know About Serverless Architecture

Nisha Gopinath Menon
-
September 27, 2017

Cloud computing, with roots in the time-sharing systems of the 1960s, has let us take advantage of manageability and elasticity. It gave us the power to spin up a new server in minutes, and it opened the door to higher-level platform services like API gateways, queues, authentication and more. Are we to embrace serverless next? Most people think and talk of serverless in the same breath as FaaS (Functions-as-a-Service) products, understandably so, and because those products disappointed them, they overlook serverless entirely. Well, they've missed out. Seeing as you are here, your view isn't as narrow.

Defining Serverless

Its two key features are invisible infrastructure in place of configured VM images, and billing by invocation rather than the usual hourly fee. It also isn't as nebulous as you might believe; most of the cloud is already serverless. The part that makes many programmers uncomfortable is the idea of their code being serverless. How are they supposed to debug and monitor the environment? What's the plan for hardening the server? Is "serverless" merely a natural evolution of Platform as a Service (PaaS) and Infrastructure as a Service (IaaS), or a paradigm shift?

Serverless? The code has to run somewhere after all.

True. The hardware is very much still there, so when they say serverless, it's a level-of-service abstraction, an illusion maintained for the developer's benefit. Serverless architecture shifts the responsibility for running the environment from the user to the cloud provider, and therein lies a compelling reason to embrace it: managing and provisioning the infrastructure falls entirely to the service provider, so the developer can focus solely on writing code.

In a traditional approach, the developer implements services or functions that communicate among themselves and with the outside world, and these services are combined into distributed processes that must be installed, run, monitored and updated. Those processes run on virtual machines. In the serverless approach, the service provider is responsible for everything from the processes to the operating systems to the servers, meaning you no longer need to purchase a dedicated server. At the same time, the service provider has the freedom to decide how to use its infrastructure most efficiently to serve requests from all clients.

BaaS and FaaS

Serverless architecture is often considered the evolution of Platform-as-a-Service. In a serverless architecture, applications either depend on third-party services, known as Backend as a Service or "BaaS," or on custom code run in ephemeral containers under Function as a Service or "FaaS."

BaaS

A growing number of third-party providers now implement required functionality, supply server-side logic and manage its internal state. This has led to applications with no application-specific server-side logic of their own: they rely on third-party services for all of it. In this sense, these applications are serverless.

FaaS

AWS Lambda and Azure Functions were among the first serverless computing offerings to see broad usage. Both host bits of code that can be executed on demand. Instead of writing whole applications, you write pieces of applications with event rules that trigger your code when needed. You don't concern yourself with the server on which your code executes, and you're billed in fractions of a second, metered on memory and CPU usage. You simply scale invisibly; whether you get one or a million invocations a day doesn't matter. So yes, we are serverless today.
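
To make this concrete, here is a minimal sketch of what such a function might look like. It follows AWS Lambda's Python handler convention and assumes an API Gateway HTTP trigger; the names and event shape are illustrative, not taken from any particular project.

```python
import json

# Minimal Lambda-style handler: the platform calls this once per invocation.
# "event" carries the trigger payload (here assumed to be an API Gateway
# HTTP request) and "context" carries runtime metadata.
def handler(event, context):
    # Read an optional query-string parameter supplied by the trigger.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")

    # Return a response in the shape an API Gateway proxy integration expects.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Nothing here starts a web server or manages a process: the platform decides when, where and how many copies of this function run, and you pay only for the invocations it serves.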

But FaaS comes with its issues. Not all workloads lend themselves to an event-based, single-operation triggering model, and not all code can be cleanly separated from its dependencies; some dependencies require quite intensive installation and configuration. Migrating an existing application to a functions-as-a-service model is often complicated enough to be financially impractical. Moreover, tooling gaps still make FaaS products painful to use in certain scenarios. That said, FaaS fits naturally with a microservices architecture, since it provides what an organization requires to run microservices.

What is not serverless?

What differentiates traditional PaaS from serverless is that PaaS providers like OpenShift and Heroku do not scale automatically. If you use a PaaS, you have to specify beforehand how many resources your application will require. It's still possible to scale the application up or down manually by changing the number of assigned resources, but that remains the developer's responsibility. Secondly, a traditional PaaS is designed for long-running server applications: the application is always running, waiting to serve incoming requests, and if you bring it down, it obviously stops serving them. With FaaS, your function starts when a request arrives and terminates the moment the request has been processed. Ideally, when there are no incoming requests, your functions consume none of the service provider's resources. The sketch below contrasts the two lifecycles.
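
As a rough illustration, this is the always-on model a traditional PaaS (or a container platform) keeps running for you, sketched with Python's standard-library HTTP server. Contrast it with the per-invocation handler shown earlier: this process occupies memory and a port around the clock, whether or not any requests arrive.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# A traditional long-running server process. It sits and waits for requests,
# consuming resources even when completely idle; keeping enough (but not too
# many) of these processes running is the operator's job, not the platform's.
class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello from a long-running process\n")

if __name__ == "__main__":
    # Starts once and serves forever until someone shuts it down.
    HTTPServer(("0.0.0.0", 8000), HelloHandler).serve_forever()
```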

How are containers different?

The rise of Docker marked the advent of container technology, and containers set the standard for the next generation of the virtual machine. The thing about containers is that they aren't merely a more lightweight replacement for VMs; they also act as a general-purpose packaging mechanism for applications. Simply put, containers are packaged mini-servers running on a container host, which is itself yet another server. The problem with containers is that you still have to get your hands dirty. One of the tenets of serverless is letting the programmer focus on the code, not the plumbing; containers are a great tool, but with them you carry the technical payload of maintaining the contained environment. For containers to go truly serverless, they would need to do something about the VMs sitting idle waiting for work, that is, adopt invocation-based billing built on detailed consumption metering. Nevertheless, there is an inherent similarity between serverless and containers.

Why should I go serverless?

  • Agile methods have received a lot of flak for being short-sighted: they speed up the development process, but users don't get access to the resulting products at the same pace. Used well, serverless architectures help solve this. Cloud computing already decoupled application delivery from owning and managing infrastructure, making it possible to deploy an application without worrying about hardware. Consider shrugging off low-value work to allow greater investment in high-value application functionality.
  • Cloud computing did get rid of the up-front infrastructure investment. Even so, we're all guilty of letting cloud virtual machines run endlessly while doing no productive work; managing capacity, even virtual capacity, is hard. Going serverless means that instead of operating an execution environment around the clock, code executes only when necessary, triggered by an event representing a request for computation.
  • Serverless architectures are a perfect fit for event-driven, episodic applications and for applications subject to unpredictable traffic patterns, which are in turn a terrible match for traditional virtual machines or container-focused application architectures.
  • The cost savings of serverless computing can be dramatic, and most of us don't appreciate their extent. Once you adopt a serverless architecture, you stop paying for idle resources, and most organizations have no handle on how large those idle costs are. With serverless, that wasted money comes back to you (see the back-of-the-envelope sketch after this list). As the financial benefits become clear, expect a surge in interest, experimentation and adoption.
  • Developers no longer need to write code to scale, and system administrators no longer need to upgrade existing servers or add new ones. Serverless architectures make it seamless and transparent to adjust server capacity to match your business needs.
  • Deployments become a lot easier. Versioning is built right into the platform, so rolling back becomes simple too: another significant responsibility off the programmer's shoulders.
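
To put a rough number on the idle-cost point above, here is a back-of-the-envelope comparison. Every price in it is a hypothetical placeholder; plug in your own provider's rates. The structural point is what matters: an always-on machine bills for every hour it exists, while a function bills only for the work it actually does.

```python
# All rates below are hypothetical placeholders, not any provider's real pricing.
VM_HOURLY_RATE = 0.05        # $ per hour for a small always-on virtual machine
PER_INVOCATION = 0.0000002   # $ per function invocation
PER_GB_SECOND = 0.0000167    # $ per GB-second of function execution time

invocations_per_month = 1_000_000
avg_duration_s = 0.2         # each invocation runs for 200 ms on average
memory_gb = 0.128            # 128 MB of memory allocated to the function

# The VM is billed around the clock, busy or idle.
vm_cost = VM_HOURLY_RATE * 24 * 30

# The function is billed per request plus per unit of compute actually consumed.
faas_cost = (invocations_per_month * PER_INVOCATION
             + invocations_per_month * avg_duration_s * memory_gb * PER_GB_SECOND)

print(f"Always-on VM: ${vm_cost:.2f}/month (billed even when idle)")
print(f"Functions:    ${faas_cost:.2f}/month (billed only when invoked)")
```

With these illustrative numbers, the always-on machine costs around $36 a month regardless of traffic, while a million short invocations come to well under a dollar. The gap narrows, and eventually reverses, as the workload becomes continuous.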

What are the drawbacks of going serverless?

  • Serverless won't prove equally efficient for long-running applications. In fact, using serverless architectures for long-running tasks can be much more expensive than running the same workload on a dedicated server or virtual machine.
  • Serverless can also easily add complexity rather than reduce it; proper tooling and ecosystem vendors need to step in to address this.
  • Vendor lock-in: your application becomes entirely dependent on a third-party provider, and you no longer retain full control of it. You likely won't be able to change platform or provider without making significant changes to your application. You also depend on the platform's availability and its API, and its prices can change. And, of course, current FaaS implementations are not compatible with one another.
  • Serverless and microservice architectures introduce additional overhead for every function or microservice call. There are no local operations; you cannot assume that two communicating functions are located on the same server.
  • To utilize resources more efficiently, a service provider may run software from several different customers on the same physical server, known as multi-tenancy. Even when the workloads of different customers are isolated in virtual machines or containers, bugs can surface in that isolation, which is a serious security concern if you're working with sensitive data. Such bugs in the platform, or even failures in another customer's code, can affect the performance or availability of your application.
  • It takes some time for a serverless platform to handle the first request to your function, an issue known as "cold start," because the platform needs to initialize internal resources. It may also release those resources if your function receives no requests for a long time. You can avoid cold starts by keeping your function in an active state, for instance by sending it periodic requests (see the sketch after this list).
  • Some vendors do not provide out-of-the-box tools for testing functions locally, so you end up testing in the cloud and paying for every invocation, which is far from ideal. Several third-party solutions are trying to fill this gap by letting you test functions locally.
  • Different vendors provide different mechanisms for logging inside functions, and it is entirely up to you to work out how to implement more advanced logging.
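
On the cold-start point, a common if crude workaround is a keep-warm ping. The sketch below polls a hypothetical function endpoint on a timer; in practice you would usually configure a scheduled trigger on the platform itself rather than run a separate process, and the URL here is a placeholder.

```python
import time
import urllib.request

FUNCTION_URL = "https://example.com/my-function"  # hypothetical endpoint
PING_INTERVAL_SECONDS = 5 * 60                    # ping every five minutes

def keep_warm():
    """Periodically invoke the function so the platform keeps an instance warm."""
    while True:
        try:
            with urllib.request.urlopen(FUNCTION_URL, timeout=10) as response:
                print(f"pinged, status {response.status}")
        except OSError as error:
            print(f"ping failed: {error}")
        time.sleep(PING_INTERVAL_SECONDS)

if __name__ == "__main__":
    keep_warm()
```

Keeping functions warm this way trades away part of the pay-only-when-used benefit for lower latency, so it is worth doing only for latency-sensitive endpoints.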

Serverless architecture is a new way of writing and deploying applications that lets you focus on the heart of your application: the code. It reduces time to market, operational costs and system complexity. But while serverless leverages third-party vendors to eliminate the need to set up and configure physical servers or virtual machines, it also locks your application and its architecture to that particular provider. In the coming years, expect more movement toward frameworks that help avoid vendor lock-in and let us run serverless applications on different cloud providers or even on-premises, so many of the cons mentioned above should be resolved before long.

Bottom line:

It's time to shrug the responsibility off onto mightier shoulders. Cloud computing has brought enormous change to the world of applications; much of the innovation in information technology over the past decade has been thanks to it. Some go as far as to say we're on the verge of another cloud revolution: the move to serverless computing. It promises to change application paradigms and holds out the possibility of moving to a post-virtual-machine, post-container world. Of course, serverless computing will also break a lot of existing practices and processes.

