Load balancer with hostname
Exposing HTTP and non-HTTP-based services to the internet
Release automatically handles HTTP-based services and generates hostnames for them. But not all services speak HTTP: many listen for other kinds of traffic, such as TCP or UDP. There are also cases where you may want to avoid the CDN or bypass the Nginx Ingress even for HTTP services. In these situations a TCP, UDP, or HTTP/GRPC service is better served by a dedicated load balancer than by the regular load balancers and ingresses. In AWS, these load balancers are implemented with Network Load Balancers (NLB); in GCP, with the L4 Load Balancer.
In order to expose a service to the internet, you will need to define a node_port. A node_port requires both a target_port and a port number.

- target_port: The port on the pod that the request is sent to internally. Your application needs to be listening for network requests on this port for the service to work.
- port: The port within the cluster that exposes the service externally. The service is visible on this port, and other pods and services send requests to this port to reach it. The load balancer directive listens on this port and forwards requests to the target_port.
- tls_enabled: Whether the load balancer will negotiate (and possibly offload) TLS encryption on the frontend.
- backend_protocol: Set to either tcp for plaintext backend requests or tls for encrypted backend requests.

Release doesn't define a fixed NodePort in Kubernetes, which allows Kubernetes to allocate a random port on the host node. For external communications, however, the fixed port is applied to the load balancer. This allows services that run on the same port number to coexist with other applications on the same physical nodes without conflict.
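For illustration, here is a minimal sketch of a node_port definition that uses these directives, pieced together from the full examples later on this page. The service name, image, and port values are placeholders; note that tls_enabled and backend_protocol are set on the loadbalancer block, as in the examples below.

services:
- backend:
    image: "..."              # placeholder image
    ports:
    - type: node_port
      target_port: "3000"     # your application listens on this port inside the pod
      port: "443"             # the cluster and load balancer expose this port
      loadbalancer:
        type: http
        visibility: private
        tls_enabled: true     # negotiate (and possibly offload) TLS on the frontend
        backend_protocol: tls # tls for encrypted backend traffic, tcp for plaintext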
Once you have defined a node_port for your service, you can define a hostname. Release uses variable interpolation for env_id and domain, so a hostname such as backend-${env_id}.${domain} resolves to a concrete name under your configured domain for each environment.

Typically, you can expose HTTP services via the default ingress and CDN options in Release. However, there are cases where these do not work well. For example, an HTTP service that requires extremely large payloads (10+ MiB) for uploads or downloads may cause the CDN or NGINX ingress to time out or reject the request. Another is an HTTP server that takes minutes to respond and could leave connections open for hours. A third, increasingly common, example is a GRPC endpoint that is not compatible with typical HTTP-only load balancers.
In these cases, you can specify an HTTP service load balancer with the following configuration. Of particular note, the node_port maps the standard port 80 to port 3000 in the container: the load balancer listens on 80 and sends requests to the container on port 3000. The annotations show specific overrides available to users in either AWS or GCP.

services:
- backend:
    image: "..."
    ports:
    - type: node_port
      target_port: "3000"
      port: "80"
      loadbalancer:
        type: http
        visibility: private
        hostnames:
        - backend-${env_id}.${domain}
        - api-${env_id}.${domain}
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: TCP # AWS
          service.kubernetes.io/healthcheck: k8s2-pn2h9n5f-l4-shared-hc # GKE
In another case, you can specify an HTTPS TLS offload service. This allows the load balancer to listen on a TLS port with your custom certificate and pass the unencrypted traffic to the container. The configuration is identical to the one above, except that tls_enabled: true is set and the port is 443.

services:
- backend:
    image: "..."
    ports:
    - type: node_port
      target_port: "3000"
      port: "443"
      loadbalancer:
        type: http
        visibility: private
        tls_enabled: true
        hostnames:
        - backend-${env_id}.${domain}
        - api-${env_id}.${domain}
You may want to enable end-to-end encryption for secured communication by setting the backend_protocol: tls parameter, as in the following GRPC example.

services:
- backend:
    image: "..."
    ports:
    - type: node_port
      target_port: "3000"
      port: "443"
      loadbalancer:
        type: grpc
        visibility: private
        tls_enabled: true
        backend_protocol: tls
        hostnames:
        - backend-${env_id}.${domain}
        - api-${env_id}.${domain}
Lastly, you may want to take a more hands-off approach so that the load balancer does not perform ALPN or other HTTP negotiation on your behalf. Simply use the type: layer7 load balancer.

services:
- backend:
    image: "..."
    ports:
    - type: node_port
      target_port: "3000"
      port: "80"
      loadbalancer:
        type: layer7
        visibility: private
        hostnames:
        - backend-${env_id}.${domain}
        - api-${env_id}.${domain}
Here is an example of exposing a Minecraft service to the internet:
services:
- name: minecraft
  image: dustyspace/docker-minecraft-server/minecraft
  has_repo: true
  ports:
  - type: node_port
    target_port: '25565'
    port: '25565'
    loadbalancer:
      type: layer4
      visibility: public-direct
      hostname: minecraft-${env_id}.${domain}
The Minecraft service listens on port 25565; this configuration creates a layer 4 load balancer for it and generates a hostname.
The code below is an example of exposing a Postgres database service privately within your VPC, so it is reachable by other services outside your cluster, over VPC peering, or through a VPN tunnel into your account:
services:
- name: db
  image: postgres:9.4
  ports:
  - type: node_port
    target_port: '5432'
    port: '5432'
    loadbalancer:
      type: layer4
      visibility: private
      hostname: postgres-db-${env_id}.${domain}