Network architecture (GCP)
Last updated
When you create environments, Release sets up network access from the internet to services in your environments. For environments hosted on GCP, connectivity is handled through GCP's networking services.
This is an overview of how Release sets up GCP to allow visitors to access your applications.
Release configures the following GCP services to direct traffic to your applications:
Cloud DNS points subdomains at load-balancer frontend IP addresses so that visitors' web browsers know which endpoints to connect to.
Google Cloud Load Balancer acts as a Layer 7 load balancer that terminates visitors' HTTPS requests and routes traffic to backends based on URL maps.
URL maps contain a set of rules against which incoming requests are matched so that requests are routed to the appropriate backends.
Load-balancer backends connect the load balancer to network endpoint groups for your applications.
Pods in Kubernetes clusters connect to the default project-wide virtual private cloud (VPC) network for your Google Cloud project.
VPC firewall rules allow traffic from the external load balancer to IP addresses in your VPC.
Google Kubernetes Engine connects incoming requests to pods within your Kubernetes clusters through an Nginx proxy.
When a visitor opens a URL for one of your services in a web browser, the visitor's browser sends a DNS query for a subdomain in your zone in Google Cloud DNS. A DNS resolver returns the IP address for a GCP load-balancer frontend.
The visitor's browser then connects to the load balancer via HTTPS, and sends a request.
The load balancer forwards this request on to Kubernetes, which serves a response from one of your applications.
We'll use an example to see how GCP networking routes traffic.
Our visitor wants to connect to an application running in our Release environment. Let's follow their request to explore Release's network configuration of GCP in more detail.
In a Release environment, each service that is accessible from the internet has a unique URL. You can find the URL for a service on the Environment Details page in Release.
To start the request, our visitor clicks a link or enters the URL in their browser, for example, https://app-service.staging-env.example.com.
The visitor's web browser queries their DNS resolver for the hostname app-service.staging-env.example.com. The visitor's DNS resolver, through a series of DNS servers, queries one of GCP's authoritative name servers for the hostname.
Google Cloud DNS responds to the visitor's DNS resolver with the IP address.
How Release configures this DNS record: When we created our environment, Release added an A record to Google Cloud DNS associating app-service.staging-env.example.com with the IP address of a load-balancer frontend (for example, 216.239.32.108). The hostname is based on the hostname templates configured for our application.
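As a sketch of that templating step, here is how a hostname might be rendered from a template. The ${...} syntax and the variable names are assumptions for illustration, not Release's actual template format:

```python
# Hypothetical hostname templating: substitute variables into a template.
# The template syntax and variable names here are illustrative only.
def render_hostname(template: str, values: dict) -> str:
    hostname = template
    for key, value in values.items():
        hostname = hostname.replace("${" + key + "}", value)
    return hostname

hostname = render_hostname(
    "${service}.${environment}.${domain}",
    {"service": "app-service", "environment": "staging-env", "domain": "example.com"},
)
print(hostname)  # app-service.staging-env.example.com
```

Release then creates the A record for the rendered hostname, pointing at the load-balancer frontend's IP address.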
The visitor's browser completes a TLS handshake with Google's load-balancer frontend, so that all further traffic between the browser and the load balancer is encrypted.
Release provisions the SSL certificate used for this connection in advance by proving ownership of the domain to a certificate authority via DNS-based validation.
After the connection is secured, the browser sends its HTTP request.
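A minimal sketch of that request (real browsers send many more headers):

```http
GET / HTTP/1.1
Host: app-service.staging-env.example.com
Accept: text/html
```

The Host header is what the load balancer matches against its URL map in the next step.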
The load balancer now matches the request against a URL map, which associates the request with the appropriate load-balancer backend.
In our app's load balancer, the URL map would look like this simplified example:
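The hostname and backend name below are taken from this walkthrough; everything else is an illustrative sketch, not the exact resource Release creates:

```yaml
# Simplified URL map (structure follows GCP's urlMaps resource)
name: release-url-map                    # illustrative name
hostRules:
- hosts:
  - app-service.staging-env.example.com
  pathMatcher: app-service
pathMatchers:
- name: app-service
  defaultService: backendServices/release-kne0hndklwzmg90rp0iif97ukyz4cd7o
```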
Since the host and path in our visitor's request both match this URL map, the load balancer directs the visitor's traffic to the load-balancer backend called release-kne0hndklwzmg90rp0iif97ukyz4cd7o.
If, for some reason, a request does not match any URL map rules, Release forwards it to https://none-such.releasehub.com.
How Release configures this URL map: When we created our environment, Release created a URL map associating the service's hostname and paths with a load-balancer backend, and added the URL map to the load balancer.
GCP's load balancer now directs the visitor's traffic to a backend service.
A load-balancer backend in GCP is responsible for connecting the load balancer to network endpoints. In our example, the network endpoint is a pod in our Kubernetes cluster.
GCP simplifies connecting to multiple endpoints at once by adding endpoints to network endpoint groups.
Release configures a network endpoint group (NEG) per zone (for example, us-west1-a, us-west1-b, and us-west1-c) for each environment and adds the internal VPC network IP addresses of the servers running in these environments to each group.
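Conceptually, the grouping works like this Python sketch (the zone names and internal IPs are made up for illustration):

```python
from collections import defaultdict

# Each endpoint is an (availability zone, internal VPC IP) pair; there is
# one network endpoint group (NEG) per zone.
endpoints = [
    ("us-west1-a", "10.128.0.12"),
    ("us-west1-b", "10.128.0.27"),
    ("us-west1-a", "10.128.0.31"),
]

negs = defaultdict(list)
for zone, ip in endpoints:
    negs[zone].append(ip)

for zone in sorted(negs):
    print(zone, negs[zone])
```

The backend service then balances requests across the endpoints in these per-zone groups.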
If you configure session affinity for your applications, the backend service keeps track of affinity at the network-zone level. The backend service uses health checks to keep track of healthy endpoints that can handle requests.
Because the endpoints in our clusters are connected to a private network, the project-wide default VPC, our backend service needs to pass through the VPC firewall.
The firewall also needs to allow health checks from the load balancer.
Release configures VPC firewall rules to accept traffic from Google's load-balancing IP ranges: 130.211.0.0/22 and 35.191.0.0/16.
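To see what these rules admit, a small sketch using Python's standard ipaddress module to check whether a source address falls inside Google's documented load-balancing ranges:

```python
import ipaddress

# Google's external load balancers and their health checks originate from
# these documented ranges; the VPC firewall must allow traffic from them.
GOOGLE_LB_RANGES = [
    ipaddress.ip_network("130.211.0.0/22"),
    ipaddress.ip_network("35.191.0.0/16"),
]

def is_google_lb_source(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in GOOGLE_LB_RANGES)

print(is_google_lb_source("130.211.1.17"))  # True
print(is_google_lb_source("203.0.113.9"))   # False
```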
Once the visitor's request reaches our Kubernetes cluster, a Kubernetes Nginx ingress controller routes the request to the correct pod to handle the request.
For this example, suppose the service at https://app-service.staging-env.example.com listens on port 8000. Let's call this service the page-rendering service.
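The rule the ingress controller applies might look like this Kubernetes Ingress manifest. The resource and Service names are illustrative; Release generates the actual resources:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: page-rendering              # illustrative name
spec:
  ingressClassName: nginx
  rules:
  - host: app-service.staging-env.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: page-rendering    # illustrative Service name
            port:
              number: 8000
```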
The page-rendering service will handle and process the visitor's request, make any required external requests, and respond with an HTTP response.
To render the page for our visitor, the service may have to communicate with other services in the Kubernetes cluster or cloud-provided services, such as databases.
For example, the page-rendering service may connect to a Redis cache that runs as a service in your Release environment and to a PostgreSQL database hosted on Google Cloud SQL.
To connect to the Redis cache, Release provides the connection strings, such as hostname and password, to the page-rendering service as environment variables.
Connecting to other services within a single environment follows the same workflow as connecting to services in a single Kubernetes namespace. Services in a namespace are added to a Kubernetes network and can address other services directly by name.
In our example, the Redis cache service might be called redis, so the page-rendering service accesses Redis directly by connecting to the hostname redis.
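A sketch of how the service might consume such a connection string. The variable name REDIS_URL and the URL format are assumptions for illustration, not a documented Release convention:

```python
import os
from urllib.parse import urlparse

# Simulate the environment variable Release would inject into the container.
# REDIS_URL is an illustrative name; the password and port are made up.
os.environ["REDIS_URL"] = "redis://:s3cret@redis:6379/0"

parsed = urlparse(os.environ["REDIS_URL"])
print(parsed.hostname)  # redis  (the in-namespace service name)
print(parsed.port)      # 6379
```

A real service would pass this URL to its Redis client library rather than hard-coding the hostname.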
If our Cloud SQL database is in the same GCP project and shares the same VPC network as our cluster, no extra steps in GCP are required.
If, however, the database is in a different VPC, we need to set up VPC network peering in GCP. VPC network peering allows connections between two different VPCs without requiring public IP addresses for your database.
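Peering is created from both sides; a sketch with gcloud, where the network and project names are illustrative assumptions:

```shell
# Peer the cluster's VPC with the database's VPC (one command per direction).
gcloud compute networks peerings create cluster-to-db \
    --network=default \
    --peer-project=db-project \
    --peer-network=db-vpc

gcloud compute networks peerings create db-to-cluster \
    --project=db-project \
    --network=db-vpc \
    --peer-project=cluster-project \
    --peer-network=default
```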
Release can also build and host static resources on Google Cloud Storage.
Requests for resources served from Google Cloud Storage follow the same DNS and load-balancing steps as dynamic requests, but instead of routing to a Kubernetes ingress controller, requests are routed to Cloud Storage.