Using an Existing EKS Cluster

By default, Release creates a new, dedicated EKS cluster from scratch. However, you can instead point Release at an existing EKS cluster, which is useful if you have specific requirements for your clusters, such as particular security policies, internal processes and best practices, or existing infrastructure-as-code configurations.

To use an existing EKS cluster with Release successfully, review the requirements below.

Prerequisites

Take a look at the eksctl documentation on non-eksctl-created clusters to understand the requirements and get a sense of what is supported.

Follow the instructions in that documentation to give Release access to your cluster.

In summary, you need:

  • An existing EKS cluster and node groups with associated VPC, routing requirements, and subnets.

  • An updated aws-auth ConfigMap giving Release access to your Kubernetes cluster.

  • Supporting add-ons and Helm charts (listed below).
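
If you want to confirm these prerequisites from the command line, something like the following AWS CLI calls can help. This is a sketch, assuming your cluster is named my-cluster in region-code, matching the examples later on this page:

# Confirm the cluster exists and note its VPC, subnet, and security group IDs
aws eks describe-cluster --name my-cluster --region region-code \
    --query "cluster.resourcesVpcConfig"

# List the node groups attached to the cluster
aws eks list-nodegroups --cluster-name my-cluster --region region-code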

Networking

If you have an existing EKS cluster, you should already have all the networking, routing, and security settings configured correctly.

The following table outlines the minimum configuration Release requires for your existing EKS cluster to ensure workloads deployed to it operate correctly.

| AWS Resource | Min | Recommended | Notes |
| --- | --- | --- | --- |
| VPC CIDR mask | /19 | /16 | A VPC should be created with a CIDR block big enough to support the workloads and services deployed to it. We recommend using a /16 size for all workload types. |
| Private subnets | 3 | 3 | At least three subnets in three Availability Zones. |
| Public subnets | 3 | 3 | At least three subnets in three Availability Zones. |
| Internet gateway | 1 | 1 | Pods and workloads need to be able to access the internet. Creating a completely isolated private cluster is not tested at this time. |
| NAT gateway | 1 | 1 | Only one NAT gateway with an EIP is recommended; having more is supported if you require high availability for your production workloads. |
| Node type | t3a.large | t3a.xlarge (preprod), m5a.2xlarge (prod) | Instance sizes are highly dependent on your workload count and type. |
| Node count | 3 | >3 | |
| Spot Instances | 0 | 0 | You can enable spot workloads in a separate node group. We recommend using at least one non-spot workload node group for cluster support services. |
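
As a quick sanity check against the table above, you can list the subnets in the cluster's VPC and confirm they span at least three Availability Zones. A sketch using the AWS CLI, where vpc-0abc123 is a placeholder for your VPC ID:

aws ec2 describe-subnets --region region-code \
    --filters Name=vpc-id,Values=vpc-0abc123 \
    --query "Subnets[].{Subnet:SubnetId,AZ:AvailabilityZone,CIDR:CidrBlock,PublicOnLaunch:MapPublicIpOnLaunch}" \
    --output table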

Subnet tags

You can skip this section if you created your cluster and its subnets with AWS-native tools like eksctl or the AWS Console.

You must apply tags to each subnet when you create a cluster to tell Release what the subnet's function is (public or private) and which subnets to deploy to.

Be sure the name of the cluster you create matches the variable <cluster_name> wherever it is used.

You need at least two private subnets. Although public subnets are optional, you need at least two if you have any.

Note that tags for private and public subnets are different.

| Tag Key | Tag Value | Example | Notes |
| --- | --- | --- | --- |
| kubernetes.io/cluster/<cluster_name> | shared | kubernetes.io/cluster/production: "shared" | In our testing, it didn't matter whether we used "owned" or "shared". |
| kubernetes.io/role/internal-elb | 1 | kubernetes.io/role/internal-elb: "1" | Apply this tag to each PRIVATE subnet only. |
| kubernetes.io/role/elb | 1 | kubernetes.io/role/elb: "1" | Apply this tag to each PUBLIC subnet only. |
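
If you need to apply these tags yourself, the AWS CLI can tag each subnet directly. A minimal sketch, assuming a cluster named production and placeholder subnet IDs:

# Tag a PRIVATE subnet
aws ec2 create-tags --resources subnet-0priv123 \
    --tags Key=kubernetes.io/cluster/production,Value=shared \
           Key=kubernetes.io/role/internal-elb,Value=1

# Tag a PUBLIC subnet
aws ec2 create-tags --resources subnet-0pub456 \
    --tags Key=kubernetes.io/cluster/production,Value=shared \
           Key=kubernetes.io/role/elb,Value=1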

Add-ons and Helm charts

Release requires several add-ons and Helm charts to function properly. Go through the following table to identify whether anything needs to be added to your cluster. If you have any questions or concerns, feel free to reach out to us.

| Add-on or Helm Chart | Required? | Min. version | Notes |
| --- | --- | --- | --- |
| CoreDNS | Yes | latest | CoreDNS provides intra- and inter-cluster DNS lookups. |
| Amazon VPC CNI | Yes | latest | Required to connect to Amazon networking resources. |
| Amazon EBS CSI | Yes | latest | Allows pods and workloads to mount EBS volumes; not required if you never use storage volumes. |
| OIDC provider | Yes | N/A | The OIDC provider can be added at cluster-create time or later; we recommend taking advantage of service accounts and IAM roles in your cluster. |
| External DNS | Yes | latest | External DNS is a Helm chart that allows Release to add external or internal DNS entries for services and load balancers. Highly recommended, but not necessary if you never use Route 53 DNS. |
| Datadog | No | latest | Highly recommended for capturing logs and metrics. Release offers a Datadog integration, but unmanaged clusters are not included in the integration unless you request it. |
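
If your cluster is missing any of these, the OIDC provider and External DNS in particular can be added after the fact. The commands below are a sketch, not Release's managed installation; they assume eksctl and Helm are available, the cluster is my-cluster in region-code, and the community ExternalDNS chart is acceptable for your environment (you will still need to configure its provider, domain filters, and IAM permissions):

# Associate an IAM OIDC provider with the cluster
eksctl utils associate-iam-oidc-provider --cluster my-cluster \
    --region region-code --approve

# Install ExternalDNS from the community chart repository
helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
helm install external-dns external-dns/external-dns --namespace kube-system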

Allow Release access via ConfigMap

Release uses a "Console Role" to access your cloud resources and to identify itself when accessing your EKS cluster. To access your cluster from the control plane, Release needs the Amazon Resource Name (ARN) of this role.

To find the ARN, go to the AWS CloudFormation stack called release-integration. In the Resources tab, follow the link to the ConsoleRole, which takes you to AWS IAM; the role name will look something like releasehub-integration-ConsoleRole-XYZ. Copy the ARN, as you'll need it in the next steps.

For a cluster named my-cluster in region-code, the role ARN might look something like this: arn:aws:iam::111122223333:role/release/release-integration-ConsoleRole-xxxx.
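
If you prefer the CLI, you can pull the same ARN from the CloudFormation stack directly. A sketch, assuming the stack is named release-integration and the role's logical ID is ConsoleRole:

# Find the role's physical name in the stack
aws cloudformation describe-stack-resources --stack-name release-integration \
    --query "StackResources[?LogicalResourceId=='ConsoleRole'].PhysicalResourceId" \
    --output text

# Resolve the role name to its ARN
aws iam get-role --role-name <role-name-from-previous-command> \
    --query "Role.Arn" --output text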

To use the ARN to grant Release permission to deploy to your cluster, you need to remove the /release path from it.

Remember: The ARN MUST NOT have any path segments after role/. For example, arn:aws:iam::xyz:role/my/long/path/role-name needs to be shortened to arn:aws:iam::xyz:role/role-name. You also cannot use an STS or assumed-role ARN; you must use the original role. For example, you cannot use arn:aws:sts::xyz:assumed-role/role-name/role-session.

We can now grant permission to Release to deploy to your cluster by adding an entry to its aws-auth ConfigMap using the shortened ARN:

eksctl create iamidentitymapping --cluster my-cluster --region region-code \
    --arn arn:aws:iam::111122223333:role/release-integration-ConsoleRole-xxxx \
    --username admin --group system:masters \
    --no-duplicate-arns

Please note that this grants the role administrative access to your cluster. This is usually acceptable because Release needs to perform administrative actions in the cluster, such as creating namespaces and installing Helm charts. If you need to restrict the role, let us know what level of permissions you would like, and we can work with you to verify that the permissions will work with our deployment processes.
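
Before moving on, you can confirm the identity mapping was created (same placeholder cluster and region as above):

eksctl get iamidentitymapping --cluster my-cluster --region region-code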

Import the existing cluster to Release

Now that the cluster information is available and Release has been granted access, you can import the cluster using the import dialog. Fill in the following fields:

| Name of Field | Required Information | Example | Notes |
| --- | --- | --- | --- |
| Cloud Provider Integration | The cloud integration tied to your AWS account | production | This is a drop-down that you cannot edit, so create the integration beforehand and make sure it is attached to the same account as the existing resources. |
| Region | The region where the existing resources exist | us-east-1 | This is a drop-down and must match the region where the existing resources were created. |
| Cluster Name | The name of the existing cluster | prod-cluster | This is a drop-down and must match the name of your cluster. |
| Domain | The subdomain to use | release.example.com | This domain must be created as part of the cluster prerequisites, or you can choose a Release-supplied domain name. |

Test and verify access

View the aws-auth configuration of your cluster with the following command:

kubectl get -n kube-system configmap/aws-auth -o yaml

It will look something like this:

apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:masters
      rolearn: arn:aws:iam::111122223333:role/release-integration-ConsoleRole-xxxx
      username: admin
  mapUsers: |
    []
kind: ConfigMap

You can also navigate to Settings -> Clusters -> Cluster and click the Verify Cluster button. If the cluster status remains "Pending" (or worse, "Errored"), verify the configuration and settings described above. If you get no errors, your cluster is ready to go!

We recommend using a minimum of three and a maximum of ten nodes, with a steady state of three or four. (Autoscaling is supported if you install the cluster autoscaler.)
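
If you want node counts to scale automatically, one common option is the open-source Kubernetes Cluster Autoscaler Helm chart. A minimal sketch, assuming the required IAM permissions for autoscaling are already in place and the cluster is my-cluster in region-code:

helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm install cluster-autoscaler autoscaler/cluster-autoscaler \
    --namespace kube-system \
    --set autoDiscovery.clusterName=my-cluster \
    --set awsRegion=region-code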

This page walks you through the same steps as the AWS EKS authentication documentation. Read that documentation and refer to it as you follow the steps here.
