How many developers would prefer spending more of their time building new products that create value rather than configuring and maintaining production environments? Most, if not all, is the answer. Learn what serverless computing is and how it can solve this developer dilemma.
In today’s always-on world, developers are under even more pressure to build and deploy new features faster and more frequently. Whether a startup or an existing brand, success for companies in today’s dynamic and competitive environment equates to delighting customers. They must provide on-demand applications that are best-in-class for speed and responsiveness while providing easy access to data. All this while continuously implementing new features.
Developers and IT operations, or now DevOps teams, have traditionally been responsible for ensuring the production environment and infrastructure are set up to function seamlessly as new products and features are released. What if there was a better way? Serverless computing is quickly becoming the must-have solution for getting developers back to their day jobs of writing code.
Serverless computing defined
At a high level, serverless computing is a type of cloud computing that transfers the responsibility for allocating and provisioning servers from application developers or IT teams to a cloud services provider.
Serverless computing is especially valuable to developers whose core competencies and interests center on writing code that solves business problems. Developers simply upload code that specifies event triggers and the related infrastructure requirements. The platform provider then deploys this code into a runtime environment that supports the specified language.
At a more granular level, serverless computing involves running server-side computations in stateless (no persisted data) and ephemeral (short-lived) compute containers. These backend services are provided on demand rather than requiring an application to run continuously; they are activated by triggers, which makes them event-based.
Event-driven functions are often referred to as Function-as-a-Service or FaaS. Functions are designed to perform a specific task that can be launched on-demand and scaled individually. They are completely self-contained without any external dependencies. Functions can be linked with other functions to create a processing workflow or they can serve as components of a larger application that interact with other code. Overall, functions make development easier and their execution as events makes operations less expensive.
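As a concrete sketch of such a function, here is a minimal Python handler. The `(event, context)` signature mirrors the convention common to FaaS platforms, and all names and payload fields are hypothetical:

```python
def handle_ride_request(event, context=None):
    """A hypothetical FaaS-style handler: stateless, self-contained, and
    invoked once per triggering event (here, a ride request)."""
    # All state arrives in the event payload; nothing persists between calls.
    rider = event["rider_id"]
    pickup = event["pickup"]
    # The function performs one narrow task: build a dispatch message.
    return {"dispatch": f"car requested for {rider} at {pickup}"}

# A platform would invoke the handler once per event; simulated locally:
print(handle_ride_request({"rider_id": "r-42", "pickup": "5th & Main"}))
```

Because the function owns one task and carries no external dependencies, the platform can launch and scale instances of it independently, or chain it with other functions into a workflow.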
Use cases for serverless computing
Common serverless use cases include processing real-time data and handling back-end requests triggered by mobile apps or IoT devices. In certain instances, the functions of an application may sit idle for an extended period, then suddenly face a flurry of event requests that must be handled at once. Or, in the case of data sent from IoT devices with limited or intermittent Internet connectivity, a function would need to be activated to process the incoming data as it arrives.
Companies that are part of the growing gig economy often only require computing power and resources when an event is triggered. For example, computing resources for a car rideshare app only need to be spun up when a customer requests a ride via the app. The same is true in the instance of a customer clicking to make a purchase.
Serverless computing is ideal for certain types of batch processing. One of the most common use cases is a service that uploads and processes a series of individual image files before sending them on to the next step in the workflow.
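A rough local sketch of that batch pattern: in a serverless deployment the platform would launch one function instance per uploaded file; here a thread pool stands in for that fan-out, and the per-file work is a placeholder:

```python
from concurrent.futures import ThreadPoolExecutor

def process_image(filename):
    """Hypothetical per-file step. In a real deployment this would do the
    actual image work (resize, transcode, watermark) before handing the
    result to the next stage of the workflow."""
    return filename.replace(".png", ".jpg")

def run_batch(filenames):
    # Locally we mimic the platform's parallel fan-out with a thread pool;
    # a serverless provider would scale out one function instance per file.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(process_image, filenames))

print(run_batch(["a.png", "b.png", "c.png"]))  # ['a.jpg', 'b.jpg', 'c.jpg']
```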
Modernization of legacy web applications is also a potential use case for serverless computing. Keep in mind there are many moving parts in monolithic applications so a phased approach will be the most likely path to success. For example, the first step might be building serverless applications to provide scalable API services to customers from legacy web applications that don’t require an investment in “always-on” infrastructure.
Rather than attempting to decouple monolithic applications into microservices, another step might involve identifying the most likely use cases for serverless implementations such as any code that responds to events. Taking this gradual approach is less risky. In effect, walking before running.
No more renting servers
Serverless computing eliminates the need for companies to “reserve” compute resources from a cloud provider to ensure they are available whenever specific functions must be run. Instead, an event triggers the requirement for compute resources which are then dynamically served up to transact the event. The pricing model goes from a monthly fee to a transaction cost based on the length of time code is running on a server – to the millisecond.
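The billing arithmetic behind that pricing model is straightforward; this sketch uses a GB-second style formula common among providers, with purely illustrative numbers (real provider rates and billing rules differ):

```python
def invocation_cost(duration_ms, memory_gb, price_per_gb_second):
    """Pay-per-use cost of one invocation, billed to the millisecond:
    memory allocated * execution time * the provider's unit price."""
    gb_seconds = memory_gb * (duration_ms / 1000.0)
    return gb_seconds * price_per_gb_second

# Illustrative only: a 120 ms invocation at 0.5 GB, priced at
# a hypothetical $0.0000167 per GB-second.
cost = invocation_cost(120, 0.5, 0.0000167)
print(f"${cost:.10f} per invocation")
```

Costing each invocation this way is what makes serverless spend directly quantifiable, and also what makes monthly bills fluctuate with traffic.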
“There are no servers to manage or provision at all. This includes nothing that would be bare metal, nothing that’s virtual, nothing that’s a container—anything that involves you managing a host, patching a host, or dealing with anything on an operating system level, is not something you should have to do in the serverless world,” said Chris Munns, senior developer advocate for serverless at AWS, in the Prepare for Serverless Computing 2019 report.
To be clear, servers are still required for serverless computing. Computing becomes “serverless” for those who no longer have to manage scaling, capacity planning, or any other required infrastructure maintenance.
It’s important to know that serverless computing is not the solution to all computing requirements. If applications must be available 24/7 such as enterprise email or are constantly processing high workloads, serverless is not the answer.
What are the advantages of serverless computing?
- Less infrastructure management – Server management is transferred from the organization to a serverless cloud services provider. Developers simply upload code that specifies event triggers and the related infrastructure requirements.
- Scalability – When a function is written for horizontally scaled parallelism, scaling is completely automatic, elastic, and managed by the provider, allowing capacity to track even sharp fluctuations in event volume.
- Lower costs – Companies only pay for the use of specific resources instead of owning hardware assets or renting servers in the cloud that may not be fully occupied. Pay-per-use billing down to the millisecond makes costs directly quantifiable.
- Faster time to market – Developers write smaller chunks of code that can be tested and released quickly, sometimes within an hour. This contrasts with traditional application development, where building, testing, and releasing a complete enterprise application may take months.
- Freed-up resources – Developers will spend more time writing code when they are no longer saddled with allocating a portion of their time to provision, scale and maintain servers that will run their applications.
What are the disadvantages of serverless computing?
- Complexity – Many separate functions are being orchestrated across a distributed serverless system. Instead of a single complex application, there are many simple functions in a complex distributed system.
- Cost fluctuation – Since serverless computing costs are based on actual usage, monthly charges will vary. While the costs are expected to be lower, companies may find it challenging to accurately budget for serverless compared to the predictable monthly charges for standard cloud computing.
- Latency – When a function is triggered it will need to “wake up” before it can process the event. This can take as long as several seconds. Applications may include tens or hundreds of microservices which means the combined latency between each one can add up.
- Security – Due to the modular nature of serverless applications, there is added complexity to monitoring system security. It’s essential to establish metrics, tracing, and event logging that are specific to serverless computing. Read this article to learn about best practices for serverless security.
- Vendor lock-in – In the Prepare for Serverless Computing 2019 report, a third of respondents listed vendor lock-in as their biggest issue. Not surprisingly, serverless capabilities on major cloud platforms are designed to work alongside their portfolio of related services and tools. However, tools are being released that equip developers for writing serverless applications that can be deployed across multiple cloud platforms.
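The latency drawback above comes from cold starts: the first invocation of an idle function pays for container startup and dependency loading, while subsequent “warm” invocations do not. A minimal simulation of that behavior (the 0.2-second sleep is a stand-in for real startup work, not a measured figure):

```python
import time

_initialized = False

def handler(event):
    """On a 'cold' invocation the runtime must first load code and
    dependencies; warm invocations reuse the already-running instance."""
    global _initialized
    if not _initialized:
        time.sleep(0.2)  # stand-in for container startup + imports
        _initialized = True
        return {"cold_start": True, "echo": event}
    return {"cold_start": False, "echo": event}

first = handler("ping")   # pays the simulated cold-start penalty
second = handler("ping")  # served by the warm instance, no penalty
print(first["cold_start"], second["cold_start"])  # True False
```

In a chain of many functions, each link can hit its own cold start, which is why the combined latency across an application can add up.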
There is one solution that avoids vendor lock-in: the simple and secure serverless computing solution by Dis.co.
Top three serverless computing providers
Since the launch of AWS Lambda in 2014, there has been a steady stream of entrants into this sector. However, three players are recognized as leaders in the space: AWS Lambda, Microsoft Azure, and Google Cloud Platform. They all provide similar features and benefits, yet each has standout qualities that differentiate its services from the others.
AWS Lambda – It was a natural evolution for AWS to introduce serverless computing to its portfolio of affordable cloud services in 2014. While it does have the advantage of easily integrating with the many other AWS services, there are two main disadvantages. Latency, or cold boot time, delays the startup of serverless functions, and when it comes to responding to a customer trigger, seconds matter. Vendor lock-in is the other concern for AWS customers: since the serverless functions are proprietary to the Amazon cloud, it can be difficult to transition to a new vendor.
Microsoft Azure – Azure has a standard pay-as-you-go model for billing. It is flexible in that customers can use any framework, language, or tool, which aids productivity. Hybrid infrastructures are supported by Azure’s built-in management solutions. Companies that are already enterprise users of Microsoft technology find it easier to integrate Azure because it’s based on proprietary Microsoft technologies. Small and medium companies may find Azure too complex and expensive.
Google Cloud Platform – Google Cloud functions started off being similar to what Azure Cloud offers – before launching the Cloud Run serverless platform. Equipped with built-in Kubernetes containers and other capabilities, Cloud Run is designed to make it easier to run more enterprise workloads via containers, integration, and serverless functions. The platform provides excellent support for Windows and Linux and offers a vast array of global private networks. On the downside, Google Cloud Platform is more expensive than AWS and customers may experience integration challenges when using non-Google products.
How is serverless computing trending?
According to the Prepare for Serverless Computing report released in May 2019, 47% of respondents currently use serverless computing services, while 9% plan to use the services within the next six months. Tech Pro Research surveyed 159 tech professionals to find out why companies use—or do not use—serverless computing services. Current users are taking advantage of serverless for web app development, business logic, database changes, batch jobs or scheduled tasks, IoT, and multimedia processing.
Accenture surveyed more than 8,300 organizations across 20 industries and 22 countries to reveal the connection between technology adoption and a company’s financial performance. In their 2019 Future Systems report, they categorized the top 10% tier of respondents as leaders and the lower 25% as laggards. Of those who are considered leaders, Accenture reported that over 95% have adopted cloud computing and advanced tools such as serverless computing compared to 30% of laggards.
Amazon CTO Werner Vogels made this observation about the adoption of serverless computing by enterprises. “We normally expect younger, tech-oriented businesses as the first ones to try this out, but what we are actually seeing is large enterprises are the ones that are really embracing serverless technology,” he said. “The whole notion of only having to build business logic and not think about anything else really drives the evolution of serverless.”
Wrapping it up
What’s important to remember is that serverless computing is not the be-all, end-all solution for every infrastructure requirement. Specific types of workloads will most certainly benefit from serverless when they are run efficiently and securely. Others, depending on factors such as constant high workloads or strict privacy, security, or compliance requirements, will continue to run in a hosted or on-prem environment.
While some consider the use cases for serverless to be niche or limited, the reality is serverless can serve those niche and limited use cases extremely well. The question that needs to be answered upfront by application developers is whether or not serverless makes sense for their specific use case.
Ramanan Ramakrishna, Cloud CoE Lead at Capgemini, suggests “organisations should work towards incorporating serverless cloud into a digital transformation agenda for ‘born in the cloud’ initiatives rather than force-fitting it into traditional applications and associated methodologies”.
If you are considering a move to serverless computing, Gartner analyst, Arun Chandrasekaran offers this advice: “Always know how to walk before you run. Maturity in your existing cloud processes and existing cloud skills, in terms of people and processes, is extremely critical for you to be successful with serverless.”
In the end, decision-makers must accept that there are always tradeoffs associated with choosing one technology solution over another. It’s no different with serverless. Serverless is one of many available tools.
To learn more about the next generation of serverless computing, Contact Us for an introduction to Dis.co, the simple and secure serverless computing system that helps you avoid vendor lock-in and more.