Network architecture (AWS)
Release environments depend on networking components from both Kubernetes and cloud providers (for example, AWS and GCP). Here's an overview of our architecture from a networking perspective, and an example of how traffic flows through each step.
This document uses AWS names for simplicity. If you're using GCP, in most cases the architecture is the same but uses GCP equivalents to the AWS components.
Architecture overview
The main cloud components used in Release's architecture are as follows:
Route 53 for public DNS mapping and to assign unique URLs for each of your permanent and ephemeral environments.
DynamoDB to keep a table mapping each URL to a specific environment or S3 bucket.
CloudFront and Lambda@Edge to route incoming traffic to the correct environment or S3 bucket.
S3 to host static websites and assets.
Internet Gateway to allow incoming traffic into a VPC.
Elastic Load Balancer (ELB) to route traffic to the Kubernetes Ingress Controller.
Elastic Kubernetes Service (EKS) to run Kubernetes node groups.
When a user visits a URL, the request passes through Route 53. We look up the URL in the DynamoDB table and route the request either to S3 (for static websites) or to a load balancer (for containerized services). In the case of containerized services, the request then passes through to the Kubernetes Ingress Controller, and then to the specific Kubernetes pod running the associated service.
Let's take a closer look at how this works with an example.
Following a network request
Let's assume we're running three services, each in its own environment.
env-123.example.com runs a static website.
env-456.example.com runs a static website.
env-789.example.com runs a containerized service with a frontend JavaScript application and a backend API.
We'll follow a user's request from the initial DNS lookup until it hits S3 (for the static websites) or a Kubernetes pod (for the containerized service).
1. Resolving DNS and URL mapping
The first service we hit is Route 53, which handles global DNS. Route 53 routes the request to AWS CloudFront. Specifically, Release does this using AWS Alias records. An Alias record is a Route 53–specific extension to standard DNS functionality that can route traffic to specific AWS resources, such as CloudFront, instead of to an IP address or another domain.
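For illustration, here is a minimal sketch of creating such an Alias record with boto3. The hosted zone ID and CloudFront distribution domain are placeholders, while Z2FDTNDATAQYW2 is the fixed hosted zone ID AWS assigns to all CloudFront distributions:

```python
# Hypothetical sketch: pointing an environment URL at CloudFront with a
# Route 53 Alias record. Zone ID and distribution domain are placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE",  # your public hosted zone for example.com
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "env-789.example.com",
                    "Type": "A",
                    "AliasTarget": {
                        # Fixed hosted zone ID AWS uses for all CloudFront aliases
                        "HostedZoneId": "Z2FDTNDATAQYW2",
                        "DNSName": "d111111abcdef8.cloudfront.net",
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ]
    },
)
```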
Usually, CloudFront is used as a CDN to serve static files from machines close to your end user's location, but Release also uses it to intelligently route traffic based on environment type. It does this by using a Lambda@Edge function to look up the URL in a DynamoDB table. The DynamoDB table contains a record for every Release environment, including its URL and the Release service it maps to.
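As a rough sketch of this idea (not Release's actual implementation), an origin-request Lambda@Edge handler could rewrite the request's origin based on a DynamoDB lookup. The table name and its fields ("hostname", "target_type", "target") are hypothetical:

```python
# Hypothetical sketch of an origin-request Lambda@Edge handler. The table
# name and its fields are made up for illustration.
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("environment-routing")

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    host = request["headers"]["host"][0]["value"]

    # Look up which environment this hostname belongs to.
    item = table.get_item(Key={"hostname": host})["Item"]

    if item["target_type"] == "s3":
        # Static websites are served straight from the mapped S3 bucket.
        request["origin"] = {
            "s3": {
                "domainName": item["target"],  # e.g. "bucket.s3.us-east-1.amazonaws.com"
                "region": "",
                "authMethod": "none",
                "path": "",
                "customHeaders": {},
            }
        }
    else:
        # Containerized services go to the environment's load balancer.
        request["origin"] = {
            "custom": {
                "domainName": item["target"],  # e.g. the ELB's DNS name
                "port": 443,
                "protocol": "https",
                "path": "",
                "sslProtocols": ["TLSv1.2"],
                "readTimeout": 30,
                "keepaliveTimeout": 5,
                "customHeaders": {},
            }
        }

    # The Host header must match the new origin.
    request["headers"]["host"] = [{"key": "Host", "value": item["target"]}]
    return request
```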
For example, the simplified DynamoDB table below shows that environments 123 and 456 should be routed to specific S3 buckets, while 789 should be routed to a specific load balancer.
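An illustrative version of that mapping (field names and targets here are hypothetical):

| URL | Target type | Target |
| --- | --- | --- |
| env-123.example.com | S3 | Static site bucket for environment 123 |
| env-456.example.com | S3 | Static site bucket for environment 456 |
| env-789.example.com | Load balancer | Elastic Load Balancer for environment 789 |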
Requests for static websites are immediately served from S3. If the user's request maps to a containerized application, it will be routed to an Internet Gateway and then onwards as described in the next section.
You can see an overview of this in the diagram below.
2. Load balancing and ingress
Your Kubernetes clusters run in a private subnet of your VPC, and are therefore largely inaccessible from the outside world. The request for environment 789 is sent through an Internet Gateway to an Elastic Load Balancer, and then passed on to a Kubernetes Ingress Controller, which sends the request on to a specific service running in a pod.
More specifically, the request first passes through an AWS Internet Gateway to get into your VPC. From there, it hits an Elastic Load Balancer, which lives in a public subnet of your VPC. The Elastic Load Balancer passes the request to EKS, which runs in a private subnet of your VPC.
First, the request hits the NGINX Kubernetes Ingress Controller, and is then routed to a specific Release service running within a Kubernetes namespace.
In this example, our service consists of a frontend application running on port 8080 and a backend application running on port 3000. The Kubernetes Ingress Controller routes the request to the frontend's port, and the frontend then communicates with the backend service. The backend runs any application logic and passes data back to the frontend, from where the result is returned to the end user.
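As an illustrative sketch (not Release's actual configuration, which Release generates for you), an equivalent Ingress rule built with the official Kubernetes Python client could look like this; the namespace, host, and service names are placeholders:

```python
# Hypothetical sketch of an NGINX Ingress rule like the one described above,
# built with the official Kubernetes Python client (pip install kubernetes).
# Namespace, host, and service names are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="env-789", namespace="env-789"),
    spec=client.V1IngressSpec(
        ingress_class_name="nginx",
        rules=[
            client.V1IngressRule(
                host="env-789.example.com",
                http=client.V1HTTPIngressRuleValue(
                    paths=[
                        client.V1HTTPIngressPath(
                            path="/",
                            path_type="Prefix",
                            # Send matching requests to the frontend on port 8080
                            backend=client.V1IngressBackend(
                                service=client.V1IngressServiceBackend(
                                    name="frontend",
                                    port=client.V1ServiceBackendPort(number=8080),
                                )
                            ),
                        )
                    ]
                ),
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_ingress("env-789", ingress)
```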
The actual Kubernetes pods are hosted on AWS EC2 instances, which are managed by EKS.
You can see an overview of this in the diagram below. The internet gateway at the top is the same as the one shown in the bottom left of the previous diagram.
Considerations and simplifications
This is a high-level overview of Release's networking architecture. In reality, there are a few other components at play too, including:
Security groups. Each AWS service, including each underlying EC2 instance, has one or more security groups, which act as stateful firewalls. While Release automatically creates and assigns the security groups these components need to communicate with each other, you may have to manually configure access to other internal services (such as your RDS database) or external services (such as Stripe or Twilio); see the sketch after this list. You can read more about how to integrate these in the documentation for cloud-provided services and third-party services.
Other load balancer types. Release may spin up other load balancer types in certain situations. Usually, this will not affect you, but contact us if you require further information.
Availability zones. By default, Release creates resources in multiple availability zones within each region. This mitigates adverse effects on your environments if there are problems in a specific AWS data center.
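As an example of the kind of manual security group configuration mentioned above, the sketch below uses boto3 to let traffic from an EKS node security group reach an RDS Postgres instance. Both security group IDs (and the port) are placeholders for your own values:

```python
# Hypothetical sketch: allow EKS node traffic to reach an RDS Postgres
# instance by adding an ingress rule to the database's security group.
# Both security group IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789dbexample",  # the RDS database's security group
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 5432,  # Postgres
            "ToPort": 5432,
            "UserIdGroupPairs": [
                {"GroupId": "sg-0123456789nodeexample"}  # EKS node security group
            ],
        }
    ],
)
```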