
Understanding Serverless Computing


Most consumers think of cloud computing simply as off-site storage space, but over the last few years there has been a rising trend toward hosting and running code in the cloud as well. As those features have matured, it has become simple to build applications on serverless computing services.

Advantages

The primary advantage of serverless computing is simplicity. The developer doesn’t need to think about hardware requirements or worry about provisioning virtual servers; they just write the application’s functionality and upload it to the hosting provider. That lets the developer focus on developing and leaves the hardware and server maintenance to professionals in that skill area.
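To make that concrete, here is a minimal sketch of what “just writing the functionality” can look like, using the handler shape AWS Lambda expects for Python. The greeting behavior and event fields are purely illustrative.

```python
# A minimal AWS Lambda-style handler in Python. The function body is all the
# developer writes; the provider handles provisioning, scaling, and the runtime.
import json

def lambda_handler(event, context):
    # "event" carries the trigger payload (an HTTP request body, a queue
    # message, a database change record, etc.); "context" carries runtime info.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```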

The other advantages are flexibility and cost. Platforms like AWS Lambda charge only for the compute time your application actually uses. Hosting the code itself is free; you are billed purely on usage, so an application that runs only once per hour can be remarkably cheap. An application with constant database triggers will run more frequently, but the elastic nature of the cloud lets the host scale up on demand, so requests aren’t left waiting to execute the way they might be on fixed infrastructure.
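As a rough, illustrative calculation (the rates below are assumptions based on commonly published pay-per-use pricing and will vary by provider and region, so check the current price sheet), the math for a once-an-hour function looks something like this:

```python
# Back-of-the-envelope monthly cost estimate for a function that runs once per hour.
PRICE_PER_MILLION_REQUESTS = 0.20     # USD, assumed illustrative rate
PRICE_PER_GB_SECOND = 0.0000166667    # USD, assumed illustrative rate

invocations_per_month = 24 * 30       # once per hour for a 30-day month
memory_gb = 0.128                     # a 128 MB function
avg_duration_s = 0.5                  # half a second per run

request_cost = invocations_per_month / 1_000_000 * PRICE_PER_MILLION_REQUESTS
compute_cost = invocations_per_month * memory_gb * avg_duration_s * PRICE_PER_GB_SECOND

print(f"Estimated monthly cost: ${request_cost + compute_cost:.6f}")
# Under these assumptions: roughly a tenth of a cent per month,
# before any free tier is even applied.
```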

Disadvantages

With any great advantage come tradeoffs that have to be taken into account. Using a serverless setup means, obviously, that you give up control of the servers. For teams with sensitive data or complex hardware requirements, dropping code into an opaque computing black hole may not be acceptable. That said, the vast majority of applications can trust the providers to deliver fast and efficient hosting, simply because of the volume of applications and requests they handle.

The other disadvantage is that it requires a different way of architecting larger applications. Each function should be modular and run only when an appropriate trigger fires. Rather than weaving a spaghetti tangle of mini-applications constantly calling each other, execution should be driven by user input or database changes, so that each function has a clearly defined input and result. While this just sounds like good coding practice, in my experience most legacy systems do not work this way.
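A minimal sketch of that style, assuming a DynamoDB-Streams-like change event and a hypothetical downstream endpoint, might look like this: one trigger in, one well-defined result out.

```python
# Sketch of an event-driven function with a single, well-defined input and result:
# it reacts to a database change event and pushes the update to a downstream system.
# The event shape loosely follows a DynamoDB Streams record; the URL and field
# names are hypothetical.
import json
import urllib.request

DOWNSTREAM_URL = "https://example.internal/api/customers"  # hypothetical

def handle_record_change(event, context):
    updated = []
    for record in event.get("Records", []):
        if record.get("eventName") not in ("INSERT", "MODIFY"):
            continue
        new_image = record["dynamodb"]["NewImage"]  # assumed record shape
        payload = json.dumps({
            "id": new_image["Id"]["S"],
            "email": new_image["Email"]["S"],
        }).encode("utf-8")
        req = urllib.request.Request(
            DOWNSTREAM_URL, data=payload,
            headers={"Content-Type": "application/json"}, method="POST")
        urllib.request.urlopen(req)  # push the change downstream
        updated.append(new_image["Id"]["S"])
    # Defined result: which records were propagated, nothing else.
    return {"synced": updated}
```

The same shape applies to the integration use case described below: a change in the system of record fires the trigger, and the function’s only job is to propagate it.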

When to use

Applications that run as integrations between existing systems work well. UDig has several implementations of this type running between internal applications to keep data in sync. When the system of record updates, the update is seamlessly pushed to the systems that rely on it.

Back ends for small applications are also a great fit. If there is little traffic to the site, your cost stays extremely low while you still get the performance of a high-volume server. Serverless can also be a great solution for side tasks that don’t fit into the standard architecture of your application: anything from user authentication to periodically checking an external system for updates.
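For example, a scheduled side task that polls a (hypothetical) external system for updates could be as small as this sketch; the endpoint and response fields are assumptions for illustration.

```python
# Sketch of a scheduled "side task": a function run on a cron-style timer trigger
# that checks an external system for pending updates, outside the main application.
import json
import urllib.request

STATUS_URL = "https://api.example.com/v1/updates"  # hypothetical external system

def check_for_updates(event, context):
    with urllib.request.urlopen(STATUS_URL) as resp:
        data = json.load(resp)
    pending = [item["id"] for item in data.get("updates", [])]
    if pending:
        # In a real system this might enqueue work or invoke another function.
        print(f"Found {len(pending)} pending updates: {pending}")
    return {"pending_updates": len(pending)}
```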

Providers

The primary providers of serverless application hosting are AWS Lambda, Microsoft Azure Functions, and Google Cloud Functions. Amazon pioneered the space, but competitors have recognized the viability of the model and are catching up quickly. And, as we recently experienced, even the big players can suffer outages that impact your uptime. Read How to Avoid a Cloud Calamity by Andrew Duncan or check out UDig’s other cloud resources.

Digging In

  • Keeping Infrastructure out of the Way of Application Goals

  • Docker Brings Scale at Cost

  • Is Lift-and-Shift a Valid Option for Cloud Migration?

  • Minimize your Cloud Debt

  • Architectural Simplicity