Deploying Backends on AWS
The cloud is steadily becoming the de facto standard for deploying applications at large enterprises. It allows big companies to move from a CapEx-heavy model of building and maintaining data centers to an OpEx, pay-as-you-go model.
Amazon Web Services (AWS) is one of the leading cloud providers and the oldest to offer cloud services. With over 15 years of experience serving customers of different scales and industries, AWS has built up deep expertise and has added numerous services to cater to a wide range of customers and their needs. Each service varies in terms of management, costs, and performance characteristics.
In this article, we’ll cover all the services that AWS provides to deploy the backend services of an application and how to choose between them based on where you are in your cloud migration journey.
In this section, we’ll cover the following services and see when each should be used:
- Elastic Compute Cloud (EC2)
- Elastic Container Service (ECS)
- Elastic Kubernetes Service (EKS)
- AWS Fargate
Elastic Compute Cloud (EC2)
This is AWS's oldest compute offering, providing on-demand virtual machines (VMs) provisioned in the cloud. In cloud terms, this is Infrastructure as a Service (IaaS): you can configure these machines with the vCPUs, RAM, disks, and GPUs your workload needs.
The VMs are priced competitively and are billed per second (with a one-minute minimum) for most modern instance types. There are many predefined instance types based on the workload you want to run: general purpose, memory optimized, compute optimized, and so on, each with a different CPU, RAM, disk, and GPU configuration.
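As a sketch, launching one such instance with the AWS CLI looks roughly like the following. The AMI ID and key pair name are placeholders, not real resources; substitute your own values.

```shell
# Launch a single general-purpose instance (placeholder AMI and key pair).
aws ec2 run-instances \
  --image-id ami-0abcdef1234567890 \
  --instance-type t3.medium \
  --count 1 \
  --key-name my-key-pair
```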
If you are migrating from a data center (where you already have a VM-based deployment) or are just getting started on your cloud journey, you can choose to deploy your backend services to EC2. EC2 lets you lift and shift your existing application and run it in the cloud. You can even run your containerized applications on EC2 using Docker.
Or, you can use services like AWS Application Migration Service to perform an automated lift and shift. This is the fastest way to migrate your existing applications if you already have a VM-based or container-based deployment.
You also have Auto Scaling groups, which allow you to horizontally scale out your applications if needed.
For routing traffic to your application, you can either run a self-hosted reverse proxy like NGINX or Apache, or use AWS Elastic Load Balancing, which provides fast, secure, and highly available routing to your backends.
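If you go the self-hosted route, a minimal NGINX reverse-proxy configuration on the instance might look like this (the upstream address is a placeholder for wherever your backend process listens):

```nginx
# Forward incoming HTTP traffic to the backend process on this instance.
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8080;   # placeholder backend address
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```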
Elastic Container Service (ECS)
Although you can deploy your containerized workloads to EC2 machines, it is preferred to deploy them via Elastic Container Service, as it is a fully managed container orchestrator. It allows you to deploy your containers in multiple availability zones, making your backend services more resilient to zonal failures.
ECS deploys your containers across a fleet of EC2 instances registered to the cluster and lets you pool all of their resources (CPU and RAM) across the fleet.
To do this, you first create a task definition, which declares the containers you want to run and the CPU and memory requirements for each. Then you define a service, which sets the desired number of tasks (and, with auto scaling, the minimum and maximum counts), letting you fine-tune scaling for the best price-performance ratio. All of these containers run on an ECS cluster, which is a collection of EC2 VMs running Docker and the ECS container agent.
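For illustration, a minimal task definition for a single container might look like the following. The family name, image URI, and resource sizes are placeholders for your own values.

```json
{
  "family": "backend-api",
  "containerDefinitions": [
    {
      "name": "api",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/backend-api:latest",
      "cpu": 256,
      "memory": 512,
      "portMappings": [{ "containerPort": 8080 }],
      "essential": true
    }
  ]
}
```

You would register this with `aws ecs register-task-definition` and then point a service at it with the desired task count.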
You can choose ECS if your application is already containerized and you want a managed way to deploy and scale it. As with EC2, you can use AWS Elastic Load Balancing to provide fast, secure, and highly available routing to your backends. ECS also integrates with CloudWatch for monitoring and container logging, while CloudTrail records the API calls made against your cluster for auditing.
Elastic Kubernetes Service (EKS)
Another way to deploy your containerized applications is via Kubernetes. If you already have an on-premises Kubernetes setup, you can migrate it to EKS with minimal changes. Kubernetes is an open-source container orchestration platform that lets you declaratively define (via YAML files) the desired state of your application deployment; the cluster then does whatever is needed internally to reach that state. EKS is Kubernetes-conformant, though it usually runs a release or so behind the open-source project, and it cuts the management overhead significantly.
Underneath the Kubernetes cluster is a fleet of EC2 VMs running the necessary node components, such as the kubelet and a container runtime, while AWS manages the control plane (including the Kubernetes API server) for you. Like ECS, EKS pools the resources of the entire cluster and gives you a consolidated view of them. You can configure resource requests for each pod (the deployment unit in Kubernetes) and also set upper limits per pod for better resource management.
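As an illustration, a Deployment manifest with per-pod requests and limits might look like this (the names, image, and sizes are placeholders, not a prescribed configuration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend-api
  template:
    metadata:
      labels:
        app: backend-api
    spec:
      containers:
        - name: api
          image: backend-api:latest   # placeholder image reference
          resources:
            requests:               # what the scheduler reserves for the pod
              cpu: 250m
              memory: 256Mi
            limits:                 # hard ceiling for the pod
              cpu: 500m
              memory: 512Mi
```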
Pooling resources allows for better resource utilization of the underlying infrastructure, while using Kubernetes also lets you avoid vendor lock-in. With very minimal effort, you can migrate your workloads to other Kubernetes distributions as well, whether to another cloud provider or any independently hosted Kubernetes solution like Rancher.
You’ve now seen how ECS and EKS let you orchestrate your containerized applications. But underneath ECS and EKS clusters there are still EC2 VMs that you provision and pay for, and scaling them up and down brings its own operational overhead.
AWS Fargate
To make ECS and EKS even more seamless, you can use AWS Fargate as the compute layer underneath your clusters. Fargate is a serverless compute platform that lets you run and scale containers without managing servers. If you’re currently running ECS on EC2 VMs, you can typically migrate to Fargate with only minor changes to your task definitions.
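As a sketch, moving an ECS task to Fargate mostly means declaring Fargate compatibility and task-level sizing in the task definition (all names and values below are placeholders):

```json
{
  "family": "backend-api",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "1024",
  "containerDefinitions": [
    { "name": "api", "image": "backend-api:latest", "essential": true }
  ]
}
```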
Fargate has no upfront cost; you only pay for what you use. The underlying compute of your ECS/EKS cluster scales up to the requested resources (CPU, RAM) and can quickly scale up or down as requirements change. Both ECS and EKS clusters can scale down significantly when there is little or no traffic, which can cut your infrastructure costs substantially.
Now that we’ve covered different services you can use to deploy your backend services, you don’t need to opt for just one. All the AWS services integrate with each other nicely, so you can choose a mix to deploy your applications. For example, you can start by migrating your application to EC2 with minimal changes and then incrementally move services to other solutions such as ECS or EKS.
No matter which backend you choose for deploying your application, you can always use Sidekick to easily add dynamic logpoints and tracepoints to your app via a web IDE. Installation and configuration are straightforward, and Sidekick also integrates with popular IDEs like IntelliJ IDEA and VS Code.
Sidekick’s logpoints and tracepoints don’t require any restart or redeployment of your application, and dynamic logpoints can cut logging costs significantly because you add and delete them only when needed.
In this section, we’ll see a small demo of how you can use Sidekick’s dynamic logpoint and tracepoint capabilities with a Java application. We’ll use a containerized Java application and add the Sidekick agent without changing our code. You can follow the instructions from the official documentation here. Then, you will see how easy it is to add tracepoints and logpoints.
Check the demo web API based on Spring Boot here. Once you’ve built the executable JAR, you can create the container using the following Dockerfile:
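The original Dockerfile isn’t reproduced here, but a minimal sketch for a Spring Boot executable JAR might look like the following. The base image and JAR name are assumptions; adjust them to match your build output, and attach the Sidekick agent as described in its official documentation.

```dockerfile
# Hypothetical Dockerfile for the Spring Boot demo (names are placeholders).
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY target/sidekick-aws-demo.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```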
Once you build your container image, you can pass the following values to your container via environment variables, depending on which AWS service you deploy to:
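As a sketch, in an ECS task definition those values could be injected like this. The variable names and values shown are illustrative placeholders; use the exact names given in Sidekick’s documentation.

```json
{
  "environment": [
    { "name": "SIDEKICK_APIKEY", "value": "<your-api-key>" },
    { "name": "SIDEKICK_APPLICATION_NAME", "value": "sidekick-aws-demo" }
  ]
}
```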
After the application starts running, you’ll start seeing the application sidekick-aws-demo in your Sidekick dashboard.
Then connect your source code repository to the Sidekick web UI. You can also use the IntelliJ or VS Code plugin to skip this step. In this post, we’ll connect our GitHub repo to load the source code. Once connected, you can see your code in the web IDE.
Let's add a tracepoint. Open the GreetingController.java file, and add a tracepoint on line 17.
Now, call the deployed web API. You’ll see that, without pausing your application, Sidekick captures the state of the variables at your tracepoint and shows them to you.
You can see the value of the counter variable captured in our example. We called the API endpoint 7 times, so we can see 7 snapshots.
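To picture what the tracepoint captures, here is a rough, self-contained sketch of what such a controller’s logic might look like. This is plain Java with no Spring wiring, and the method body and names are assumptions for illustration, not the demo’s actual source.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the demo's greeting endpoint logic.
public class GreetingController {
    private final AtomicLong counter = new AtomicLong();

    // In the real Spring Boot demo this would be a @GetMapping handler;
    // here it is a plain method so the sketch stays self-contained.
    public String greeting(String name) {
        long count = counter.incrementAndGet(); // a tracepoint here captures 'count'
        return "Hello, " + name + "! (call #" + count + ")";
    }

    public long calls() {
        return counter.get();
    }
}
```

Each API call increments the counter, which is why calling the endpoint 7 times produces 7 distinct snapshots, each with a different captured value.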
Next, let’s add a dynamic logpoint. We’ve added one at the same spot (line 17). As you can see in the screenshot below, the count is currently zero. Once the logpoint is added, call the API a few times, and each hit is captured and shown in the dashboard. Below, we can see 4 log entries since we called the API 4 times.
That’s how easy it is to add dynamic logpoints and tracepoints to your application without any code changes, restarts, or redeployments.
In this post, we covered the different AWS services you can use to deploy your application’s backend services. There’s no single right choice: it all depends on where you are in your cloud migration/deployment journey and how much experience your developers have.
Still, this post can serve as a general guideline to help you decide between AWS’ offerings.
You can choose any of these services or a combination of them to deploy your backend services and then use Sidekick to have dynamic tracepoints and logpoints without needing to change your code or redeploy.