Load balancer with hostname
Exposing HTTP and non-HTTP-based services to the internet
Introduction
Release automatically handles HTTP-based services and generates hostnames for the backend services. But not all services speak HTTP: many listen for other types of traffic, such as TCP or UDP. There are also cases where you may want to avoid using a CDN, or bypass the NGINX ingress even for HTTP services. In these situations a TCP, UDP, or HTTP/GRPC service may not be suitable for the regular load balancers and ingresses. In AWS, these load balancers are implemented with Network Load Balancers (NLBs); in GCP, with the L4 load balancer.
In order to expose a service to the internet, you will need to define a `node_port`. A `node_port` requires both a `target_port` and a `port` number.
- `target_port`: The port on the pod that the request is sent to internally. Your application must be listening for network requests on this port for the service to work.
- `port`: The specified port within the cluster that exposes the service externally. The service is visible on this port, and other pods and services send requests to this port to reach the service. The load balancer directive listens on this port and forwards the requests to the `target_port`.
- `tls_enabled`: Whether the load balancer will negotiate (and possibly offload) TLS encryption on the frontend.
- `backend_protocol`: Set to either `tcp` for plaintext backend requests or `tls` for encrypted backend requests.

Release doesn't define a fixed `NodePort` in Kubernetes, which allows Kubernetes to allocate a random port on the host node. For external communications, however, the fixed port is applied at the load balancer. This allows services that run on the same port number to coexist with other applications on the same physical nodes without conflict.
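Putting these fields together, a `node_port` definition might look like the following sketch. This is illustrative only: the service name, image, and the exact nesting of the `node_ports` key are assumptions rather than verified Release schema; the field meanings follow the descriptions above.

```yaml
services:
  - name: backend            # hypothetical service name
    image: example/backend   # hypothetical image
    node_ports:
      - port: 8080             # cluster-facing port the load balancer listens on
        target_port: 3000      # port the application listens on inside the pod
        tls_enabled: false     # no TLS negotiation on the frontend
        backend_protocol: tcp  # plaintext requests to the backend
```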
Once you have defined a `node_port` for your service, you can define a `hostname`. Release uses variable interpolation for `env_id` and `domain`.
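As a sketch, a hostname using `env_id` and `domain` interpolation might look like this; the `hostnames` key name and the service prefix are assumptions, so consult the reference documentation for the exact schema:

```yaml
hostnames:
  - backend-${env_id}.${domain}  # env_id and domain are interpolated per environment
```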
There are detailed instructions in the reference documentation.
HTTP/S and GRPC/S services
Typically, you can expose HTTP services via the default ingress and CDN options in Release. However, there are several cases where these do not work well. For example, an HTTP service that requires extremely large payloads (10+ MiB) for uploads or downloads may cause the CDN or NGINX ingress to time out or reject the request. Another is an HTTP server that responds in minutes and could leave connections open for hours. An increasingly common example is a GRPC endpoint that is not compatible with typical HTTP-only load balancers.
HTTP example
In this case, you can specify an HTTP service load balancer with the following configuration. Of particular note, the `node_port` maps the standard port 80 to port 3000 in the container: the load balancer listens on port 80 and forwards requests to the container on port 3000. The annotations show specific overrides available to users in either AWS or GCP.
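A sketch of such a configuration, under the same schema assumptions as above (service name, image, and key nesting are illustrative; the annotation shown is the standard Kubernetes one for requesting an AWS NLB):

```yaml
services:
  - name: web
    image: example/web
    node_ports:
      - port: 80           # load balancer listens on the standard HTTP port
        target_port: 3000  # container serves HTTP on 3000
        annotations:
          # AWS: request a Network Load Balancer instead of the classic ELB
          service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
```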
HTTPS with offload example
In another case, you can specify an HTTPS TLS-offload service. This allows the load balancer to listen on a TLS port with your custom certificate and pass the unencrypted traffic to the container. This configuration is identical to the one above apart from `tls_enabled: true` and `port: 443`.
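A sketch of the TLS-offload variant, under the same schema assumptions as the HTTP example:

```yaml
node_ports:
  - port: 443          # load balancer listens for TLS with your certificate
    target_port: 3000  # container receives plain, unencrypted HTTP
    tls_enabled: true  # TLS is terminated (offloaded) at the load balancer
```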
GRPCS end-to-end encrypted example
You may want to enable end-to-end encryption for secured communication by setting the `backend_protocol: tls` parameter.
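A sketch of the end-to-end encrypted variant; the port numbers are illustrative assumptions:

```yaml
node_ports:
  - port: 443
    target_port: 50051     # e.g. a GRPC server that itself speaks TLS
    tls_enabled: true
    backend_protocol: tls  # encrypted requests are forwarded to the backend
```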
Generic layer7 example
Lastly, you may want to take a more hands-off approach so the load balancer does not perform ALPN or other HTTP negotiation on your behalf. Simply use the `type: layer7` load balancer.
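A sketch of the generic layer7 variant; where exactly the `type` key sits in the schema is an assumption:

```yaml
node_ports:
  - port: 443
    target_port: 8443
    type: layer7  # pass traffic through without ALPN or HTTP negotiation
```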
Non-HTTP services
Here is an example of exposing a Minecraft service to the internet:
The Minecraft service listens on port 25565, creates a load balancer, and generates a hostname.
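A sketch of such a Minecraft configuration (the image name and the nesting of `hostnames` are assumptions; 25565 is Minecraft's default server port):

```yaml
services:
  - name: minecraft
    image: itzg/minecraft-server  # widely used community image; an assumption here
    node_ports:
      - port: 25565         # load balancer listens on Minecraft's default port
        target_port: 25565  # container listens on the same port
    hostnames:
      - minecraft-${env_id}.${domain}
```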
The code below is an example of exposing a Postgres database service accessible privately via your VPC (to other services outside your cluster, via VPC peering, or even over a VPN tunnel into your account):
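A sketch of a private Postgres service (key nesting is an assumption; the annotation is the standard Kubernetes one for provisioning an internal AWS load balancer):

```yaml
services:
  - name: postgres
    image: postgres:15
    node_ports:
      - port: 5432         # standard Postgres port
        target_port: 5432
        annotations:
          # AWS: provision an internal (VPC-only) load balancer
          service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```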