Network architecture (GCP)

When you create environments, Release sets up network access from the internet to services in your environments. For environments hosted on GCP, connectivity is handled through GCP's networking services.

This is an overview of how Release sets up GCP to allow visitors to access your applications.

High-level network overview

Release configures the following GCP services to direct traffic to your applications:

  • Cloud DNS points subdomains at load-balancer frontend IP addresses so that visitors' web browsers know which endpoints to connect to.

  • Google Cloud Load Balancer acts as a Layer 7 load balancer that terminates visitors' HTTPS requests and routes traffic to backends based on URL maps.

  • URL maps contain a set of rules against which incoming requests are matched so that requests are routed to the appropriate backends.

  • Load-balancer backends connect the load balancer to network endpoint groups for your applications.

  • Pods in Kubernetes clusters connect to the default project-wide virtual private cloud (VPC) network for your Google Cloud project.

  • VPC firewall rules allow traffic from the external load balancer to IP addresses in your VPC.

  • Google Kubernetes Engine connects incoming requests to pods within your Kubernetes clusters through an Nginx proxy.

When a visitor opens a URL for one of your services in a web browser, the visitor's browser sends a DNS query for a subdomain in your zone in Google Cloud DNS. A DNS resolver returns the IP address for a GCP load-balancer frontend.

The visitor's browser then connects to the load balancer via HTTPS, and sends a request.

The load balancer forwards this request on to Kubernetes, which serves a response from one of your applications.

We'll use an example to see how GCP networking routes traffic.

Following a network request to a dynamic service

Our visitor wants to connect to an application running in our Release environment. Let's follow their request to explore Release's network configuration of GCP in more detail.

In a Release environment, each service that is accessible from the internet has a unique URL. You can find the URL for a service on the Environment Details page in Release.

To start the request, our visitor clicks on a link or enters the URL in their browser, for example, https://app-service.staging-env.example.com.

1. DNS resolution

The visitor's web browser queries their DNS resolver for the hostname app-service.staging-env.example.com. The visitor's DNS resolver, through a series of DNS servers, queries one of GCP's authoritative name servers for the hostname.

Google Cloud DNS responds to the visitor's DNS resolver with the IP address of the load-balancer frontend.

How Release configures this DNS record: When we created our environment, Release added an A record to Google Cloud DNS associating app-service.staging-env.example.com with the IP address of a load-balancer frontend, for example, 216.239.32.108. The hostname used is based on the hostname templates configured for our application.
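
If you want to see this resolution step yourself, the short Python sketch below performs the same lookup a browser's resolver does and prints the address Cloud DNS answers with. The hostname is the example used on this page, so it won't resolve outside an environment like this one.

import socket

# Example hostname from this page; substitute one of your own service URLs.
HOST = "app-service.staging-env.example.com"

# getaddrinfo performs the same DNS lookup a browser's resolver does and
# returns the load-balancer frontend IP address that Cloud DNS answers with.
for *_, sockaddr in socket.getaddrinfo(HOST, 443, proto=socket.IPPROTO_TCP):
    print(sockaddr[0])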

2. Load balancing and URL maps

The visitor's browser completes a TLS handshake with Google's load-balancer frontend, so that all further traffic between the browser and the load balancer is encrypted.

Release creates the domain-validated SSL certificate used for this connection in advance by verifying the domain with a certificate authority using DNS-based validation.

After the connection is secured, the browser sends the request:

GET / HTTP/2
Host: app-service.staging-env.example.com
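
As an illustration, the following Python sketch sends the same request using only the standard library (it negotiates HTTP/1.1 rather than HTTP/2, but the TLS handshake and Host header work the same way). The hostname is the example from this page.

import http.client

HOST = "app-service.staging-env.example.com"  # example hostname from this page

# HTTPSConnection completes the TLS handshake with the load-balancer frontend;
# the Host header is what the URL map will be matched against.
conn = http.client.HTTPSConnection(HOST, timeout=10)
conn.request("GET", "/", headers={"Host": HOST})
response = conn.getresponse()
print(response.status, response.reason)
conn.close()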

The load balancer now matches the request against a URL map, which associates the request with the appropriate load-balancer backend.

In our app's load balancer, the URL map would look like this simplified example:

{
  "hosts": [
    "app-service.staging-env.example.com",
    "*.app-service.staging-env.example.com"
  ],
  "paths": [
    "/",
    "/*"
  ],
  "backend": "release-kn0rp0iie0hndklwzmg9f97ukyz4cd7o"
}

Since the host and path in our visitor's request both match this URL map, the load balancer directs the visitor's traffic to the load-balancer backend called release-kn0rp0iie0hndklwzmg9f97ukyz4cd7o.

How Release configures this URL map: When we created our environment, Release created a URL map associating the service's hostname and paths with a load-balancer backend, and added the URL map to the load balancer.

If, for some reason, a request does not match any URL map rules, Release forwards it to https://none-such.releasehub.com.
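
To make the matching behavior concrete, here is a toy Python sketch that routes a request against the simplified URL map above, falling back to the no-match host. It only illustrates the rules described here; it is not GCP's actual implementation.

import fnmatch

# Simplified URL map from the example above.
URL_MAP = {
    "hosts": [
        "app-service.staging-env.example.com",
        "*.app-service.staging-env.example.com",
    ],
    "paths": ["/", "/*"],
    "backend": "release-kn0rp0iie0hndklwzmg9f97ukyz4cd7o",
}
FALLBACK = "https://none-such.releasehub.com"  # used when nothing matches

def route(host, path):
    # A request is routed to the backend only if both the host and the path
    # match one of the URL map's patterns.
    host_ok = any(fnmatch.fnmatch(host, pattern) for pattern in URL_MAP["hosts"])
    path_ok = any(fnmatch.fnmatch(path, pattern) for pattern in URL_MAP["paths"])
    return URL_MAP["backend"] if host_ok and path_ok else FALLBACK

print(route("app-service.staging-env.example.com", "/"))  # matches the backend
print(route("unknown.example.com", "/"))                   # falls back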

3. Backend services

GCP's load balancer now directs the visitor's traffic to a backend service.

A load-balancer backend in GCP is responsible for connecting the load balancer to network endpoints. In our example, the network endpoint is a pod in our Kubernetes cluster. GCP simplifies connecting to multiple endpoints at once by grouping endpoints into network endpoint groups (NEGs).

Release configures a network endpoint group per network zone (for example, us-west1-a, us-west1-b, and us-west1-c) for each environment and adds the internal VPC network IP addresses for servers running in these environments to each group.

If you configure session affinity for your applications, the backend service keeps track of affinity at the network-zone level. The backend service uses health checks to keep track of healthy endpoints that can handle requests.
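
The toy sketch below illustrates the idea of grouping endpoints by zone and sending traffic only to endpoints that pass health checks. The zone names, addresses, and selection logic are invented for illustration; they are not how GCP's backend services are actually implemented.

import random

# Hypothetical network endpoint groups, keyed by zone.
NEGS = {
    "us-west1-a": ["10.0.1.5:8000", "10.0.1.6:8000"],
    "us-west1-b": ["10.0.2.7:8000"],
    "us-west1-c": ["10.0.3.9:8000"],
}

# Hypothetical health-check results: only these endpoints may receive traffic.
HEALTHY = {"10.0.1.5:8000", "10.0.2.7:8000", "10.0.3.9:8000"}

def pick_endpoint(preferred_zone=None):
    # Prefer a healthy endpoint in the preferred zone, then fall back to any zone.
    zones = ([preferred_zone] if preferred_zone in NEGS else []) + \
            [zone for zone in NEGS if zone != preferred_zone]
    for zone in zones:
        candidates = [ep for ep in NEGS[zone] if ep in HEALTHY]
        if candidates:
            return zone, random.choice(candidates)
    raise RuntimeError("no healthy endpoints available")

print(pick_endpoint("us-west1-b"))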

4. VPC network firewall

Because the endpoints in our clusters are connected to a private network (the project-wide default VPC), traffic from our backend service needs to pass through the VPC firewall.

The firewall also needs to allow health checks from the load balancer.

Release configures VPC firewall rules to accept traffic from Google's load balancing IP ranges: 130.211.0.0/22 and 35.191.0.0/16.
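
You can check whether a source address falls inside these ranges with Python's standard ipaddress module, as in the short sketch below. The two ranges are the ones listed above; the sample addresses are arbitrary.

import ipaddress

# Google load-balancing and health-check source ranges listed above.
ALLOWED_RANGES = [
    ipaddress.ip_network("130.211.0.0/22"),
    ipaddress.ip_network("35.191.0.0/16"),
]

def allowed(source_ip):
    # True if the source address is inside one of the allowed ranges.
    ip = ipaddress.ip_address(source_ip)
    return any(ip in network for network in ALLOWED_RANGES)

print(allowed("130.211.1.17"))  # True: inside 130.211.0.0/22
print(allowed("203.0.113.9"))   # False: outside both ranges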

5. Kubernetes ingress

Once the visitor's request reaches our Kubernetes cluster, a Kubernetes Nginx ingress controller routes it to the correct pod.

6. Request processing and response

For this example, suppose the service at https://app-service.staging-env.example.com listens on port 8000. Let's call this service the page-rendering service.

The page-rendering service handles the visitor's request, makes any required external requests, and returns an HTTP response.

To render the page for our visitor, the service may have to communicate with other services in the Kubernetes cluster or cloud-provided services, such as databases.

For example, the page-rendering service may connect to a Redis cache that runs as a service in your Release environment and to a PostgreSQL database hosted on Google Cloud SQL.
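
For a concrete picture of what such a service looks like, here is a minimal Python stand-in for the page-rendering service that listens on port 8000 and returns an HTML response. It is only a sketch; a real service would query its cache and database before rendering.

from http.server import BaseHTTPRequestHandler, HTTPServer

class PageRenderer(BaseHTTPRequestHandler):
    # Toy stand-in for the page-rendering service described above.
    def do_GET(self):
        # A real service would read the Redis cache and query the database here.
        body = b"<html><body>rendered page</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Listen on port 8000, matching the example above.
    HTTPServer(("0.0.0.0", 8000), PageRenderer).serve_forever()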

7. Kubernetes networking

To connect to the Redis cache, Release provides the connection details, such as the hostname and password, to the page-rendering service as environment variables.

Connecting to other services within a single environment follows the same workflow as connecting to services in a single Kubernetes namespace. Services in a namespace are added to a Kubernetes network and can address other services directly by name.

In our example, the Redis cache service might be called redis, so the page-rendering service will access Redis directly by connecting to the hostname redis.
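
A minimal sketch of that connection, using the redis-py client, might look like the following. The environment variable names here (REDIS_HOST, REDIS_PORT, REDIS_PASSWORD) are hypothetical; use whatever variables Release injects for your cache service.

import os
import redis  # requires the redis-py package

# Hypothetical variable names; Release injects the actual connection details.
cache = redis.Redis(
    host=os.environ.get("REDIS_HOST", "redis"),  # the in-cluster service name
    port=int(os.environ.get("REDIS_PORT", "6379")),
    password=os.environ.get("REDIS_PASSWORD"),
)

cache.set("greeting", "hello from the page-rendering service")
print(cache.get("greeting"))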

8. Connecting to cloud-hosted databases

If our Cloud SQL database is in the same GCP project and shares the same VPC network as our cluster, no extra steps in GCP are required.

If, however, the database is in a different VPC, we need to set up VPC network peering in GCP. VPC network peering allows connections between two different VPCs without requiring public IP addresses for your database.
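
Connecting to the Cloud SQL PostgreSQL database from the page-rendering service can be as simple as the sketch below, which uses psycopg2 and environment variables. The variable names are hypothetical; substitute the ones configured for your environment.

import os
import psycopg2  # requires the psycopg2-binary package

# Hypothetical variable names; use the values configured for your environment.
conn = psycopg2.connect(
    host=os.environ["DATABASE_HOST"],
    dbname=os.environ.get("DATABASE_NAME", "app"),
    user=os.environ["DATABASE_USER"],
    password=os.environ["DATABASE_PASSWORD"],
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
conn.close()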

Network requests for static files

Release can also build and host static resources on Google Cloud Storage.

Requests for resources served from Google Cloud Storage follow the same DNS and load-balancing steps as dynamic requests, but instead of routing to a Kubernetes ingress controller, requests are routed to Cloud Storage.
