Load balancer with hostname

Exposing HTTP and non-HTTP-based services to the internet


Introduction

Release automatically handles HTTP-based services and generates hostnames for the backend services. But not all services speak HTTP: many listen for other types of traffic, such as TCP or UDP. Even for HTTP services, there are cases where you may want to avoid using a CDN or bypass the Nginx ingress. In these situations a TCP, UDP, HTTP, or gRPC service needs a dedicated load balancer rather than the regular load balancers and ingresses. In AWS, these load balancers are implemented with Network Load Balancers (NLBs); in GCP, with the L4 Load Balancer.

In order to expose a service to the internet, you will need to define a node_port. A node_port requires both a target_port and a port number.

  • target_port: This is the port on the pod that the request gets sent to internally. Your application needs to be listening for network requests on this port for the service to work.

  • port: The port on which the service is exposed. The service is visible on this port within the cluster, and other pods and services send requests to it to reach the service. The loadbalancer directive also listens on this port and forwards requests to the target_port.

  • tls_enabled: Whether the load balancer will negotiate (and possibly offload) TLS encryption on the frontend.

  • backend_protocol: Set to tcp for plaintext backend requests or tls for encrypted backend requests.

  • Release doesn't define a fixed NodePort in Kubernetes, which allows Kubernetes to allocate a random port on the host nodes. However, for external communication, the fixed port is applied to the load balancer. This allows services that run on the same port number to coexist with other applications on the same physical nodes without conflict.

Once you have defined a node_port for your service, you can define a hostname. Release interpolates the ${env_id} and ${domain} variables into the hostname.
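For orientation, here is a minimal sketch that puts these pieces together. The service name, image, and port numbers are placeholders, and layer4 is used here as a generic choice; the sections below show complete HTTP, TLS offload, gRPC, and layer 7 variants.

services:
- name: my-service                # placeholder service name
  image: "..."
  ports:
  - type: node_port
    target_port: '8080'           # port the container listens on
    port: '8080'                  # port the load balancer exposes
  loadbalancer:
    type: layer4
    visibility: private
    hostname: my-service-${env_id}.${domain}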

There are detailed instructions in the reference documentation.

HTTP/S and gRPC/S services

Typically, you can expose HTTP services via the default ingress and CDN options in Release. However, there are several cases where these do not work well. For example, an HTTP service that requires extremely large payloads (10+ MiB) for uploads or downloads may cause the CDN or NGINX ingress to time out or reject the request. Another case is an HTTP server that takes minutes to respond and could leave connections open for hours. An increasingly common example is a gRPC endpoint that is not compatible with typical HTTP-only load balancers.

HTTP example

In this case, you can specify an HTTP service load balancer with the following configuration. Note in particular that the node_port maps the standard port 80 to port 3000 in the container: the load balancer listens on port 80 and forwards requests to the container on port 3000. The annotations show provider-specific overrides available in AWS and GCP; include only the one that matches your cluster's cloud provider.

services:
- name: backend
  image: "..."
  ports:
  - type: node_port
    target_port: "3000"   # port the container listens on
    port: "80"            # port the load balancer listens on
  loadbalancer:
    type: http
    visibility: private
    hostnames:
    - backend-${env_id}.${domain}
    - api-${env_id}.${domain}
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: TCP # AWS
      service.kubernetes.io/healthcheck: k8s2-pn2h9n5f-l4-shared-hc # GKE

HTTPS with offload example

In another case, you can specify an HTTPS TLS offload service. This allows the load balancer to listen on a TLS port with your custom certificate and pass the unencrypted traffic to the container. The configuration is identical to the one above, except that tls_enabled: true is set and the port is 443.

services:
- name: backend
  image: "..."
  ports:
  - type: node_port
    target_port: "3000"
    port: "443"
  loadbalancer:
    type: http
    visibility: private
    tls_enabled: true
    hostnames:
    - backend-${env_id}.${domain}
    - api-${env_id}.${domain}

gRPC end-to-end encrypted example

You can enable end-to-end encryption for secure communication by setting the backend_protocol: tls parameter.

services:
- name: backend
  image: "..."
  ports:
  - type: node_port
    target_port: "3000"
    port: "443"
  loadbalancer:
    type: grpc
    visibility: private
    tls_enabled: true
    backend_protocol: tls
    hostnames:
    - backend-${env_id}.${domain}
    - api-${env_id}.${domain}

Generic Layer7 Example

Lastly, you may want to take a more hands-off approach so that the load balancer does not perform ALPN or other HTTP negotiation on your behalf. In that case, use the type: layer7 load balancer.

services:
- name: backend
  image: "..."
  ports:
  - type: node_port
    target_port: "3000"
    port: "80"
  loadbalancer:
    type: layer7
    visibility: private
    hostnames:
    - backend-${env_id}.${domain}
    - api-${env_id}.${domain}

Non-HTTP services

Here is an example of exposing a Minecraft service to the internet:

services:
- name: minecraft
  image: dustyspace/docker-minecraft-server/minecraft
  has_repo: true
  ports:
  - type: node_port
    target_port: '25565'
    port: '25565'
  loadbalancer:
    type: layer4
    visibility: public-direct
    hostname: minecraft-${env_id}.${domain}

The Minecraft service listens on port 25565; this configuration creates a layer 4 load balancer for it and generates a hostname.

The code below is an example of exposing a postgres database service privately within your VPC, so it can be reached by other services outside your cluster, over VPC peering, or through a VPN tunnel into your account:

services:
- name: db
  image: postgres:9.4
  ports:
  - type: node_port
    target_port: '5432'
    port: '5432'
  loadbalancer:
    type: layer4
    visibility: private
    hostname: postgres-db-${env_id}.${domain}